About

Who Am I?

Hi! I'm Falaah. I'm an Engineer/Scientist by training and an Artist by nature, broadly interested in Reliable and Responsible AI. To that end, I conduct fundamental research on Robust Deep Learning, Fairness in Machine Learning and AI Ethics, and create scientific comics (and other technical art) to disseminate the nuances of this work in a way that is more democratic and accessible to the general public. I'm currently an Artist-in-Residence at the Center for Responsible AI at NYU. I'm extremely lucky to get to do two things I absolutely love: fundamental research and creating scientific comics!

At NYU, I run the 'Data, Responsibly' Comic series along with Prof. Julia Stoyanovich. We just released our second volume, 'Fairness and Friends', on bias in algorithmic systems, fairness in machine learning, and broader doctrines of equality of opportunity and justice from political philosophy.

Volume 1, titled 'Mirror, Mirror', dealt with questions such as: What work are we funding? Who are we building models for? What problems are we solving? We delved into digital accessibility and the impact of poorly designed systems on marginalized demographics, and tied these insights to broader issues in the machine learning landscape, including problems with operationalizing fairness, misguided incentive structures in scholarship, exclusionary discourse, and questions of culpability when things go wrong.

I also run the 'Superheroes of Deep Learning' comic series with Prof. Zack Lipton, which documents the thrilling tales and heroic feats of ML's larger-than-life champions.

I recently finished an Artist Residency at the Montreal AI Ethics Institute. My visual essay, 'Decoded Reality', is an artistic exploration of the power dynamics that shape the design, development and deployment of ML systems. We present visual interpretations of how algorithmic interventions manifest in society, with the hope of provoking the designers of these systems to think critically about the socio-political underpinnings of each step of the engineering process.

Previously, I worked as a Research Engineer at Dell EMC, Bangalore, where I designed and built data-driven models for Identity and Access Management (IAM). My research focused on behavior-based authentication, online learning for CAPTCHA design, and (graph) signal processing for dynamic threat modelling.

My work in industry showed me firsthand the pressing challenges of building 'production-ready' models. Contrary to the media narrative around AI, we have yet to figure out how to build models that are robust, trustworthy and designed to thrive in the wild. These challenges have informed my interest in exploring the foundations of generalization, robustness and fairness, and in translating these insights into algorithms with provable guarantees. I'm also interested in critically assessing how AI impacts, and is in turn impacted by, the underlying social setting in which it is formulated.

Curriculum Vitae | Google Scholar

News

News

August 2021: New Superheroes of Deep Learning comic 'Machine Learning for Healthcare' is out now!

May-June 2021: We just released a brand new, public-facing comic series, titled 'We are AI'! It's a 5-volume primer on AI, blending the social, the legal and the technical, for anyone and everyone, and it accompanies R/AI's new public education course of the same name.

April 2021: Giving an invited talk titled "It's funny because it's true: confronting ML catechisms" at the 'Rethinking ML Papers' Workshop @ICLR 2021! Video recording here. (My talk starts at ~2:48:00)

April 2021: I'll be starting my PhD at NYU's Center for Data Science this Fall!

April 2021: 'Fairness and Friends' has been accepted as an exhibit to the 'Rethinking ML Papers' Workshop @ICLR 2021! Video explainer here.

March 2021: Hosting 'Decoded Reality' - a collaborative, community brainstorm session about the role of Power in the creation of Responsible AI, at MozFest 2021, based on my visual essay of the same name!

March 2021: Presenting 'Fairness and Friends' - a translation tutorial that bridges scholarship from political philosophy and fair-ML - with Julia Stoyanovich and Eleni Manis, at ACM FAccT 2021! Recording is available here.

Feb 2021: Data, Responsibly Comics Vol 2: 'Fairness and Friends' is out now!

Jan 2021: RDS Comics, Vol 1: 'Mirror, Mirror' has been translated into French!!!

Dec 2020: Facilitating the MAIEI x RAIN-Africa collaboration 'Perspectives on the future of Responsible AI in Africa' workshop.

Dec 2020: The Spanish edition of RDS Comics, Volume 1: 'Mirror, Mirror' is out now!!!

Nov 2020: Facilitating the 'Privacy in AI' Workshop, by MAIEI and the AI4Good Lab.

Nov 2020: Excited to be speaking at the 'Ethics in AI Panel' by the McGill AI Society

Nov 2020: Giving an invited talk on 'Ethics in AI', based on Decoded Reality, at the TechAide Montreal AI4Good Conference + Hackathon

Nov 2020: Speaking about our 'Data, Responsibly' Comic books at the Rutgers IIPL Algorithmic Justice Webinar, with Julia Stoyanovich and Ellen Goodman!

Oct 2020: 'Mirror, Mirror' and 'Decoded Reality' have been accepted to the Resistance AI Workshop at NeurIPS 2020!

Oct 2020: Started the "Superheroes of Deep Learning" comic series, with Zack Lipton! Volume 1: 'Machine Learning Yearning' is out now!

My Work

Research

Recent interest in codifying fairness in Automated Decision Systems (ADS) has resulted in a wide range of formulations of what it means for an algorithmic system to be fair. Most of these propositions are inspired by, but inadequately grounded in, political philosophy scholarship. This paper aims to correct that deficit. We introduce a taxonomy of fairness ideals using doctrines of Equality of Opportunity (EOP) from political philosophy, clarifying their conceptions in philosophy and their proposed codification in fair machine learning. We arrange these fairness ideals onto an EOP spectrum, which serves as a useful frame for guiding the design of a fair ADS in a given context. We use our fairness-as-EOP framework to re-interpret the impossibility results from a philosophical perspective, as the incompatibility between different value systems, and demonstrate the utility of the framework with several real-world and hypothetical examples. Through our EOP framework, we hope to answer what it means for an ADS to be fair from a moral and political philosophy standpoint, and to pave the way for similar scholarship from ethics and legal experts.

Paper: Falaah Arif Khan, Eleni Manis and Julia Stoyanovich. "Fairness as Equality of Opportunity: Normative Guidance from Political Philosophy"

Comic book: Falaah Arif Khan, Eleni Manis and Julia Stoyanovich. “Fairness and Friends”. Data, Responsibly Comics, Volume 2 (2021)

Video: Translation Tutorial: Fairness and Friends, ACM FAccT 2021
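The impossibility results referenced in the abstract are easy to see numerically. Here is a toy illustration (hypothetical numbers, not from the paper) of one well-known incompatibility, due to Chouldechova: when base rates differ across groups, equalizing positive predictive value and false negative rate forces unequal false positive rates.

```python
# Toy arithmetic illustrating a fairness impossibility result (Chouldechova 2017).
# The identity FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR) ties the false positive rate
# to the base rate p once PPV and FNR are fixed. All numbers are hypothetical.

def implied_fpr(base_rate: float, ppv: float, fnr: float) -> float:
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

ppv, fnr = 0.8, 0.2  # suppose we equalize PPV and FNR across both groups
for group, base_rate in [("A", 0.3), ("B", 0.6)]:
    print(f"group {group}: base rate {base_rate:.2f} -> "
          f"FPR {implied_fpr(base_rate, ppv, fnr):.3f}")
# group A: base rate 0.30 -> FPR 0.086
# group B: base rate 0.60 -> FPR 0.300  (the FPRs cannot also be equal)
```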

The fundamental research problem was to investigate the efficacy of a novel “who I am/how I behave” authentication paradigm. Conventional authentication works on a “what I know” (username/password) or “what I have” (device) model. Our system would study the user's behavior while typing their username and use the activity profile as the key against which access was granted. This eliminated the need for the user to remember a password or have access to a registered device. Moreover, even if a password is cracked or a device is stolen, the bad actor would not be able to penetrate the system, because their behavior would intrinsically differ from that of the genuine user.

Paper: Arif Khan F., Kunhambu S. and G, K.C. (2019). "Behavioral Biometrics and Machine Learning to Secure Website Logins"

US Patent: Arif Khan, Falaah, Kunhambu, Sajin and Chakravarthy G, K. Behavioral Biometrics and Machine Learning to secure Website Logins. US Patent 16/257650, filed January 25, 2019
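To make the "how I behave" idea concrete, here is a minimal sketch of authentication from keystroke timings. The feature set (key hold and flight times) and the z-score decision rule are illustrative assumptions on my part, not the method from the paper or patent.

```python
# Sketch: enroll a user from the timing profile of how they type their username,
# then authenticate a new attempt by its deviation from that profile.
import numpy as np

def extract_features(key_events):
    """key_events: list of (key, press_time, release_time) tuples."""
    holds = [release - press for _, press, release in key_events]
    flights = [key_events[i + 1][1] - key_events[i][2]  # next press - this release
               for i in range(len(key_events) - 1)]
    return np.array(holds + flights)

class BehavioralAuthenticator:
    def __init__(self, threshold=3.0):
        self.threshold = threshold  # max tolerated mean z-score (hypothetical)

    def enroll(self, samples):
        # Fit a per-feature Gaussian profile; assumes every sample is the same
        # username, so the feature vectors all have equal length.
        X = np.stack([extract_features(s) for s in samples])
        self.mu, self.sigma = X.mean(axis=0), X.std(axis=0) + 1e-6

    def authenticate(self, sample) -> bool:
        z = np.abs((extract_features(sample) - self.mu) / self.sigma)
        return z.mean() < self.threshold
```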

CAPTCHAs, short for Completely Automated Public Turing tests to tell Computers and Humans Apart, have been around since 2003 as the simplest human-user identification test. They can be understood as Reverse Turing Tests because, in solving a CAPTCHA challenge, it is a human subject who is trying to prove their human-ness to a computer program.

Over the years, we have seen CAPTCHA challenges evolve from a string of characters for the user to decipher, to an image-selection challenge, to something as simple as ticking a checkbox. As each new CAPTCHA scheme hits the market, it is inevitably followed by research on new techniques to break it. Engineers must then go back to the drawing board and design a new, more secure CAPTCHA scheme, which, upon deployment and subsequent use, is again subject to adversarial scrutiny. This arduous cycle of designing, breaking, and redesigning to strengthen against subsequent breaking has become the de-facto lifecycle of a secure CAPTCHA scheme. This raises the question: are our CAPTCHAs truly "Completely Automated"? Is the labor involved in designing each new secure scheme outweighed by the speed with which a suitable adversary can be designed? Is the fantasy of creating a truly automated Reverse Turing Test dead?

Reminding ourselves of why we count CAPTCHAs as such an essential tool in our security toolbox, we characterize CAPTCHAs along a robustness-user experience-feasibility trichotomy. With this characterization, we introduce a novel framework that leverages adversarial learning and human-in-the-loop Bayesian inference to design CAPTCHA schemes that are truly automated. We apply our framework to character CAPTCHAs and show that it does in fact generate a scheme that steadily moves closer to our design objectives of maximizing robustness while maintaining user experience and minimizing allocated resources, without requiring manual redesign.

US Patent: Arif Khan, Falaah and Sharma, Hari Surender. Framework to Design Completely Automated Reverse Turing Tests. US Patent 16/828520, filed March 24, 2020 and US Patent (Provisional) 62/979500, filed February 21, 2020
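As a rough illustration of the framework's design loop, here is a toy simulation: a Beta posterior per candidate scheme tracks the human solve rate (the human-in-the-loop Bayesian inference piece), a simulated solver stands in for the adversary, and the most robust scheme that clears a user-experience floor is selected. All rates and thresholds below are made-up stand-ins, not the patented framework.

```python
# Toy CAPTCHA design loop: Bayesian human-solve-rate estimates + adversarial solver.
import random

random.seed(0)
DISTORTIONS = [0.2, 0.4, 0.6, 0.8]             # candidate distortion levels
UX_FLOOR = 0.85                                # minimum acceptable human solve rate
posteriors = {d: [1, 1] for d in DISTORTIONS}  # Beta(alpha, beta) per candidate

def human_solves(d):          # stand-in for real human-in-the-loop outcomes
    return random.random() < 1.0 - 0.3 * d

def attacker_solve_rate(d):   # stand-in for an adversarially trained solver
    return max(0.0, 0.9 - 1.0 * d)

for _ in range(500):          # gather evidence about human solve rates
    d = random.choice(DISTORTIONS)
    a, b = posteriors[d]
    posteriors[d] = [a + 1, b] if human_solves(d) else [a, b + 1]

def posterior_mean(d):
    a, b = posteriors[d]
    return a / (a + b)

viable = [d for d in DISTORTIONS if posterior_mean(d) >= UX_FLOOR]
best = min(viable, key=attacker_solve_rate)    # most robust scheme that keeps UX
print(f"chosen distortion {best}: human ~{posterior_mean(best):.2f}, "
      f"attacker ~{attacker_solve_rate(best):.2f}")
```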

Threat modelling is the process of identifying vulnerabilities in an application. The standard practice of threat modelling today involves drawing out the architecture of the product and then looking at the structure and nature of calls being made and determining which components could be vulnerable to which kinds of attacks.

Threat modelling is an extremely important step in the software development lifecycle, but emerging practice shows that teams usually construct and evaluate the threat model only once, before deploying the application. Industrial offerings also cater to this approach by providing tools that generate static models, suitable for one-time reference. The major drawback of this approach is that software is not a static entity: it is subject to dynamic changes in the form of incremental feature enhancements and routine redesign for optimization. Threat modelling should therefore be imparted the same dynamism, and our work attempts to enable this.

Application logs are used to model the product as a weighted directed graph, where vertices are code elements and edges indicate function calls between elements. Unsupervised learning models set edge weights as indicators of vulnerability to a specific attack. Graph filters are then created, and the nodes that pass through a filter form the vulnerable subgraph for that attack. Superimposing the vulnerable subgraphs for the different attacks yields a threat model that is dynamic in nature and evolves as the product grows.
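Here is a compact sketch of that pipeline using networkx; the log format, the vulnerability scorer and the filter threshold are illustrative assumptions.

```python
# Sketch: logs -> weighted call graph -> per-attack vulnerable subgraphs -> threat model.
import networkx as nx

def build_call_graph(log_entries):
    """log_entries: iterable of (caller, callee) pairs parsed from application logs."""
    G = nx.DiGraph()
    for caller, callee in log_entries:
        w = G.get_edge_data(caller, callee, {"weight": 0})["weight"]
        G.add_edge(caller, callee, weight=w + 1)  # call frequency as the base weight
    return G

def vulnerable_subgraph(G, score, threshold=0.7):
    """Filter: keep edges whose vulnerability score (from an unsupervised model,
    abstracted here as a callable) exceeds the threshold for a given attack."""
    edges = [(u, v) for u, v, data in G.edges(data=True) if score(u, v, data) > threshold]
    return G.edge_subgraph(edges)

def threat_model(G, scorers):
    """Superimpose the vulnerable subgraphs for each attack into one model."""
    model = nx.DiGraph()
    for attack, score in scorers.items():
        for u, v in vulnerable_subgraph(G, score).edges:
            model.add_edge(u, v, attack=attack)
    return model
```

Rebuilding the graph from fresh logs as the product changes is what keeps the resulting model dynamic.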

Stuff

Articles, Talks and More!

March 17, 2021 | Interview

Interview with Hayat Life

I sat down with the folks at Hayat Life to talk about my ML comics - what inspired me to start making them, where I envision them going, and what to expect next!

November 11, 2020 | Interview

RIIPL Algorithmic Justice Webinar Series

The amazing Julia Stoyanovich and I sat down with Ellen Goodman, from the Rutgers Institute for Information Policy and Law, to discuss the comedic treatment of AI bias, normativity and exclusion, in the context of our 'Data, Responsibly' Comic books!

November, 2020 | Visual Essay

Decoded Reality

Decoded Reality is a visual essay on the power dynamics that shape the design, development and deployment of ML systems. We present artistic interpretations of how algorithmic interventions manifest in society in the hope of provoking the designers of these systems to think critically about the socio-political underpinnings of each step of the engineering process.

September 30, 2020 | Interview

MetroLab "Innovation of the Month" Feature

"Mirror, Mirror" was featured as the MetroLab Network+ Government Technology "Innovation of the Month". In this interview we discuss the origins of the project, our creative process and the future of Data, Responsibly Comics!

September 15, 2020 | Article (Satire)

Hope Returns to the Machine Learning Universe

According to witnesses, Earth has been visited by the *Superheroes of Deep Learning*. What do they want? What powers do they possess? Will they fight for good or for evil? Read on to learn more!

June 11, 2020 | Interview

Interview with AI Hub

I sat down with the folks at AIHub to chat about my research and art. We talk (meta-)security, scientific comics and demystifying the hype around AI.

July 11, 2020 | Article (Satire)

What is Meta-Security?

In this seminal essay, I explain the hottest up-and-coming sub-field of Machine Learning - Meta-Security!

February 20, 2020 | Talk

The Impossibility of Productizable AI: Problems and Potential Solutions

In my talk at the Sparks Tech Forum at Dell, Bangalore, I present a social and technical perspective on the most pressing problems in Machine Learning today, the sources of these problems and some potential solutions.

Slides

January 4, 2020 | Article

Deep Learning Perspectives from Death Note: Another Approximately Inimitable Exegesis

Masked under a binge-worthy anime lies an adept critique of the ongoing deep learning craze in the industry. Here’s my commentary on the technical symbols in Death Note.

Get in Touch

Contact

Get in touch if you want to collaborate on an interesting project, want to commission some custom artwork, or simply want to discuss something wonderfully esoteric!