If you'd like to join the conversation about responsible technology, I'm hosting a Responsible AI Founders Panel on Dec 14th with Samsung Next. 👉🏼 You can RSVP here.
The firing of Google's lead responsible AI researcher Timnit Gebru last week sent shockwaves through the responsible technology community. I'm not going to go into depth about the series of events since it's already well documented here and here. But the sad realization for me was that many responsible technologists (AI researchers, data scientists, product managers, designers) face a conundrum between pushing for real, meaningful change in their organization and society at large, and toeing the line to maintain the corporate status quo. Timnit's words in her email to management stung me personally: "your life gets worse when you start advocating for underrepresented people, you start making the other leaders upset when they don’t want to give you good ratings during calibration". There is a catch-22 in hiring diverse talent: we are often asked to offer new perspectives and interrogate the status quo, but when we do, our advocacy is often met with retaliation. We cannot truly push for systemic change until leadership at the top is willing to hear dissenting opinions, execute on new visions, and make room for new blood.
The crux of the disagreement between Timnit and Google's Head of AI Jeff Dean was Google's peer review process for journal publications. Jeff Dean asserted that Timnit's recent paper on algorithmic bias, which questioned Google's own language models, did not meet the bar for publication because it lacked citations to recent studies. However, as VentureBeat's AI Weekly noted, "...from all appearances, Gebru’s work simply spotlighted well-understood problems with models like those deployed by Google, OpenAI, Facebook, Microsoft, and others."
And so this brings into question the fairness and fidelity of the private sector's review of journal publications. Most universities and research centers review papers for intellectual property concerns and the quality of their citations and replicated studies, not from a business interest perspective. But clearly it was not in Google’s best interest to publish an academic paper that questions its own Google Search business (which brought in $26.3B in revenue in Q3 2020 alone). Google's censorship of Timnit's paper threatens AI innovation in the private sector, and I worry there will be a chilling effect, with academic researchers leaving the private sector to do unfettered research that isn't tied to topline revenue implications.
The Rise of the Firmless Professional
Timnit's firing leads me to wonder: how does one responsibly innovate within a company deeply entrenched in maintaining its existing power structures and status quo?
Like an established immune system, once a company's culture is set, it is near impossible to introduce new antigens without their being attacked. New agents of change—often diverse talent with conviction and independent thought—are attacked by the antibodies of the establishment. For true innovation to happen, I believe we need a league of "firmless professionals"—a free market of independent experts who work across different firms to license their ideas, contract on projects, and conduct funded research. As Spero Ventures investor Sarah Eshelman wrote in a recent Medium post, "...today, we live in a world of firmless professionals. Firmless professionals work across firms as much as they work within their own firm. But their success is defined by the interactions that take place outside the firm."
A firmless professional is different from a freelancer because they have dedicated their life’s work to one topic, advancing their writing and research across different firms. Unlike freelancers, they are not "hired guns" brought in for discrete tasks. They are highly skilled workers with proprietary expertise and networks that can be lent to a firm or corporation for a period of time, at a price. This way the firmless professional can maintain their independent thinking without distortion by corporate interests or assimilation into a corporate identity. Startups like Huddle are making it easier for highly skilled workers to lend their skills to founding teams for a blend of equity and cash. I wonder whether we can create a similar model for researchers and responsible technologists: a marketplace that provides them compensation and funding for independent research. Instead of working for Google, you could have your algorithmic bias research funded by Google, bypassing its review process and corporate bureaucracy completely.
True Responsible Innovation lies in Startups
I'm hopeful that true innovation in the responsible AI space will come from startups. Fiddler is a startup pioneering explainable AI. Parity AI, founded by my good friend Rumman Chowdhury, is creating an algorithmic audit system for detecting business risk. Arthur AI is detecting model drift in banks and healthcare systems. Soon we will be able to understand why an AI model rejected someone for a bank loan or passed them over for a job. These emerging startups are paving the way in the responsible AI space, and VCs are starting to take notice by forming investment theses in the space, such as those at Omidyar Network, Lux Capital, Plug and Play, and Boldstart Ventures. I'm optimistic that venture capital can push for innovation in the responsible AI space in ways that big tech companies cannot.
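To make the idea of explainability concrete, here is a minimal sketch of the kind of feature-attribution explanation such tools produce. It is not Fiddler's or Arthur's actual product or API; the loan features, data, and helper function are invented for illustration, and it assumes only numpy and scikit-learn.

```python
# A minimal, hypothetical sketch of explaining *why* a model rejected a loan
# applicant, using per-feature contributions of a linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_to_income", "credit_history_len", "late_payments"]

# Hypothetical historical applications: 1 = approved, 0 = rejected.
X = rng.normal(size=(1_000, len(features)))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] - X[:, 3] > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain_decision(applicant):
    """For a linear model, a feature's contribution to the decision (in
    log-odds) is roughly its coefficient times how far the applicant deviates
    from the average applicant on that feature. Sorted so the features pushing
    hardest toward rejection come first."""
    contributions = model.coef_[0] * (applicant - X.mean(axis=0))
    return sorted(zip(features, contributions), key=lambda kv: kv[1])

# A hypothetical applicant: low income, high debt load, several late payments.
applicant = np.array([-1.2, 1.5, 0.1, 2.0])
print("approval probability:", model.predict_proba([applicant])[0, 1])
for name, contribution in explain_decision(applicant):
    print(f"{name:>20}: {contribution:+.2f}")
```

Real explainability tools go well beyond this (non-linear models, counterfactuals, monitoring for drift), but the core promise is the same: a human-readable account of which factors drove a decision.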
In a recent conversation with Lofred Madzou, AI Project Lead at the World Economic Forum, he mentioned that freedom and size are inversely related. The larger a corporation or an audience’s reach, the more you need to remain neutral so as not to estrange constituents, which is why large monarchies and corporations stay apolitical. However, working as a responsible technologist inherently means you must take a side to help protect marginalized communities and push back against dehumanizing tech practices. Therefore, responsible innovation will not come from large corporations but rather from small startups that can still take a side and stand for something.
"Those who stand for nothing fall for anything." — Alexander Hamilton
Stand for something,
—bosefina