Working Towards an Advanced, Inclusive, and Socially Responsible AI Framework

Three Lessons from O’Reilly’s AI Conference

Benjamin Kinsella · Apr 30

Roger Chen (left), CEO of Computable, and Ben Lorica (right), Chief Data Scientist at O’Reilly Media, introducing keynote speakers

Becoming a DataKind volunteer has afforded me a unique opportunity to bring together my interests in data science and social impact.

For this reason, I was thrilled to be able to attend the 2019 O’Reilly AI Conference in New York City during which I was able to learn from AI practitioners, research scientists, and business strategists from around the world.

In brief, the conference showcased how AI is moving from research labs to mainstream applications, so much so that there is an urgent need for interdisciplinary dialogue across sectors.

Furthermore, listening to these presentations was a reminder of my own work with DataKind.

Volunteering with the Rockefeller Foundation and Google.org, for example, I have observed firsthand the many challenges social organizations face regarding AI integration and deployment.

That is, it can be incredibly difficult for nonprofits to hire machine learning talent, build data-driven systems, and identify how AI can be applicable to their work.

As I attended the O’Reilly Conference and reflected on my own data experiences, I was able to better understand the importance of interdisciplinary teams and the need to close the gap between AI experts and social organizations, a goal that DataKind is committed to achieving.

Collectively, there were many takeaways from the O’Reilly AI Conference, particularly those pertaining to advanced AI systems, the need for inclusive and diverse teams, and opportunities to use AI for social good.

Below I expand on each of these takeaways and describe how they are reflected in the field.


There is a need to develop more mature and advanced AI systems.

First, a common theme across presentations centered on the new phases of deep learning, focusing on both the limits and opportunities of AI.

For example, Dr. Aleksander Madry, Associate Professor of Computer Science at MIT, maintained in his keynote presentation that our current AI systems are insufficiently secure, unpredictable, and often biased. Dr. Madry calls for “AI 2.0”: technology must be “much more aligned with what we humans see as significant,” he expressed during his keynote.

Priya Ravindhran (right) of H2O.ai, a driverless AI software

Many of the presentations and booths at the conference centered on the idea of making AI more mature.

For example, Ming-Wei Chang, Research Scientist at Google, presented on the new and empirically-powerful language representation model, BERT.

Furthermore, Danielle Dean, Principal Data Scientist Lead at Microsoft, described a number of powerful tools, such as automated machine learning and its application to Azure, Microsoft’s cloud computing service.

These developments, along with many others, are pushing the field forward and uncovering untapped opportunities, delineating the ways in which AI will change social and business landscapes in the near future.

However, despite these advancements, many companies still receive immense criticism for the unintentional bias in their algorithms.

To address this problem, several presentations articulated an urgent need to develop an inclusive and diverse workforce, which I expand on below.


We must all be committed to building a diverse and inclusive workforce.

A second key takeaway from the conference concerned the need to build a diverse and inclusive workforce.

For instance, Kurt Muehmel, VP of Sales Engineering at Dataiku, expressed in his keynote presentation that there’s a need to develop an organizational commitment toward ethical AI.

That is, to address our blind spots, Muehmel contends that “inclusivity and collaboration is a necessary answer”.

Development teams that are homogeneous and less inclusive are more likely to build AI systems that contain unintentional biases, Muehmel noted.

Reshama Shaikh of Women in Machine Learning and Data Science (WiMLDS) presenting on diversity and inclusion

As a timely contribution to this keynote, Reshama Shaikh, Board Member and NYC chapter organizer of Women in Machine Learning and Data Science (WiMLDS), spoke during the conference’s diversity networking lunch.

She maintained that mentorship programs, advocacy, and even rewriting job descriptions are just a few of the many ways in which organizations can commit themselves to creating diverse and inclusive teams.

Accordingly, pursuing an agenda that leverages both advanced and ethical AI raises the question: How can data be put to use in the service of humanity, and what types of projects — both existing and future — can we anticipate seeing? Below I expand on the third takeaway: how AI can be used for social good.


There is an abundance of opportunities for AI and data science to be leveraged for social good.

The third takeaway from the conference delineated the ways in which AI can be used for social good.

For example, Anna Bethke, Head of AI for Social Good at Intel Corporation, and her colleague Jack Dashwood, provided a glimpse into this field, showcasing several ways in which the AI for Good community impacts the world.

In one project, Anna and her team have deployed the TrailGuard AI, a technology that fights illegal elephant poaching in Africa by leveraging motion inference and deep neural networks.

Anna Bethke, Head of AI for Social Good, and colleague Jack Dashwood, of Intel Corporation

The above example, as well as others provided by conference presenters, including Tom Sabo of SAS, Eric Oermann of Mount Sinai Health System, and Katie Link of the Allen Institute for Brain Science, highlights how AI and data can be used to solve numerous social problems.

We can already observe several such efforts in the field: countering human trafficking, promoting internet safety, protecting natural resources, improving medical outcomes, and much more.

Furthermore, included in Bethke’s presentation was Carlos Gómez’s visualization depicting how data science and AI can be leveraged for good.

Together with Suzanne Axtell (second from right) and team, the organizers of the conference.

To conclude, the O’Reilly AI Conference highlighted both technical advancements and an urgent call to action for the field to be inclusive and meaningful in its contributions toward social good.

As I reflect on these takeaways, I have been able to make deeper connections between my own data science journey and the importance of diverse viewpoints, skill sets, and humility, such as those outlined in DataKind’s values.

In sum, we must all strive to build an open and welcoming culture that supports impactful collaborations, working to tackle the biggest challenges in data science and AI.

Interested readers can view the entire list of speakers and presentations online.

Original article to be published on DataKind’s website.


