Investment

Credo AI: With Great Power Comes Great Responsibility

Today, we’re excited to announce our investment in Credo AI, the first AI governance platform that enables companies to build and deploy responsible AI systems at scale. As our world becomes more digital, we are increasingly aware of the power of artificial intelligence to automate and augment our daily lives. But we are also aware of the potential for unintended consequences when these systems are not purposefully designed. Credo AI enables trust, integrity, and ethical standards to be designed into and deployed across any machine learning or AI system in production today. The Credo AI solution is an industry first and comes at a much-needed time in the evolution and adoption of AI.

In our founder Q&A below, we asked the founders, Navrina Singh and Eli Chen, to share their vision for how Credo AI can empower organizations to build trust and transparency into their AI systems:

Where did you guys grow up? When did you realize you would have a future in technology?

Navrina: I grew up in a small town in India, and the culture in my hometown, for better or worse, was very traditional. As a young woman, many decisions were made for me and my career choices were limited. I was grateful that my parents wanted more for me and encouraged my creativity and experimentation. They bought me a computer at a young age and gave me access to the Internet, which opened a world of possibilities for me. The more I learned online, the more empowered I felt to pursue my dreams and depart from the traditional stereotype. Technology gave me a chance to break from societal restraints and move to the US at the age of 19 to study engineering. I’ve been in the tech industry ever since.

Eli: I grew up in Taiwan and was also raised in a conservative culture. Looking back, I was never really excited to follow the traditional coursework. My father was an engineer, and I remember getting my first x86 and learning how to code video games in BASIC. As a teenager I found a way to reverse engineer copyright-protected video games. I’ve always had a knack for breaking down someone else’s software bit by bit and seeing what makes it tick. Everyone assumes that software is perfect, but once you look underneath the covers you realize that all software is written by humans, and humans are far from perfect.

How did you first discover machine learning and AI? When did you first understand its power and its limitations?

Eli: I spent much of my engineering career at tech companies like Netflix and Twitter that were early adopters of machine learning. I remember back in 2009, Netflix was shifting from a DVD business to a streaming business and put a $1 million prize on the line for developers who could build an algorithm that improved its recommendation engine by 10 percent. It was quite a success initially, and it was amazing to see new machine learning models substantially improve the many millions of recommendations Netflix was making every day. But it was also one of the first times consumers learned that sensitive attributes like their gender, sexual orientation, and race could be reverse engineered from supposedly anonymized data, and Netflix ultimately shut down the program due to consumer concerns over data privacy.

As consumers, we all want the power of machine learning and AI, but we also want our data to be used responsibly, ethically, and with our consent.

We want those who use our data to do so with a moral compass.

The Credo AI Team at their company off-site (2021).

Most tech products are not viewed through the lens of right and wrong. When did you first start thinking about responsible and ethical AI?  

Navrina: AI is playing such an important role in automating and augmenting what humans do every day, and it is hard to overstate its impact in our world.

It helps determine how news and information is spread via social media, it serves up the videos our kids watch every day, and, millions of times a day, it decides who gets approved to buy something with a credit card.

None of us can live without AI, but you don’t need to look very far to recognize that these systems also have many unintended consequences. As one example, many tech companies have had public issues with their computer vision and speech recognition technology, creating the impression that these technologies are racially biased. We’ve also seen the hiring algorithms of large employers unintentionally discriminate based on gender. Sometimes the machines are doing exactly what they were designed to do, but providing the wrong answers once we factor in fairness and ethics. Each of these experiences has reminded me of the saying “with great power comes great responsibility”. We all know that an AI-powered world can be great, but only if it is built to be inclusive of everyone and designed for all.

The Credo AI team prepares for launch (2021).

What was the inspiration for Credo AI? Why does the industry need to build responsible AI?

Navrina: The inspiration for Credo AI came in 2018. I was at Microsoft building conversational AI systems and began to fully understand their scale and scope. Ultimately, a machine learning system is built to automate decisions that humans previously made on their own. The problem is that algorithms are created in isolation and analyzed in a vacuum, but once they are deployed in the real world they become part of the fabric of our society, which is much more complex. Most people believe that computers are “as good as,” if not better than, a human at many decisions, but that’s not quite the whole story. Most machine learning systems are designed to optimize a single variable, while most humans make decisions by weighing many variables, including the greater good. Credo AI makes it possible for engineers to build AI systems that incorporate numerous inputs and outputs, including ethical and responsible standards.

Eli: If we’re being honest, machine learning has been deployed rapidly, and most traditional systems are one-dimensional and still very rudimentary in their design. If you are building an ML system and you want to optimize for revenue, your algorithm will drive decisions toward new revenue above all else. If you are optimizing for clicks, likes, or shares, the algorithm will serve you content that produces exactly those results. Credo AI makes it possible to build a trustworthy AI system: it gives organizations a chance to build AI that incorporates multiple stakeholders, satisfies government regulatory requirements, and reflects a company’s culture and values from day one in the design and delivery of their AI systems. We went into beta earlier this year and now have many of the largest companies in the world using our software.
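To make the single-metric point concrete, here is a minimal, hypothetical Python sketch; it is not Credo AI’s product or API, just a generic scikit-learn illustration on invented data. A classifier is trained to optimize predictive accuracy alone, then checked against a second dimension the optimizer never saw: the gap in positive-prediction rates between two groups (demographic parity difference). All feature names, data, and group labels below are synthetic assumptions for illustration.

```python
# Illustrative sketch only: train for one metric (accuracy), then audit a second,
# fairness-oriented metric the optimizer never considered. Not Credo AI's API.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "hiring" data: two features plus a sensitive attribute (group 0 or 1).
n = 5000
group = rng.integers(0, 2, size=n)
x1 = rng.normal(loc=group * 0.5, scale=1.0, size=n)  # feature correlated with group
x2 = rng.normal(size=n)
X = np.column_stack([x1, x2])
y = (x1 + x2 + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

# The model is trained to optimize a single objective: predictive accuracy.
model = LogisticRegression().fit(X_train, y_train)
pred = model.predict(X_test)

# Dimension 1: the metric the system was optimized for.
print("accuracy:", round(accuracy_score(y_test, pred), 3))

# Dimension 2: a fairness check the optimizer never saw.
# Demographic parity difference = |P(pred=1 | group=0) - P(pred=1 | group=1)|
rate_g0 = pred[g_test == 0].mean()
rate_g1 = pred[g_test == 1].mean()
print("demographic parity difference:", round(abs(rate_g0 - rate_g1), 3))
```

The idea Eli describes is to make checks like this second one a first-class part of how AI systems are designed, evaluated, and reported on, rather than an afterthought bolted on once the model is already in production.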

What is the long term vision for the company?

Navrina: Our vision is to be embedded in the fabric of every enterprise that is building machine learning, and to help companies deliver on the promise of responsible AI. This is a very personal journey for me. I’ve come a long way from rural India thanks to the tech industry, and I am grateful for the opportunity it has afforded me. As I look at my children’s generation, I believe that AI has the same potential to empower the next generation and unlock even more opportunity for those most in need. AI can expand access to advanced healthcare services, provide personalized learning pathways in every school, and democratize access to financial services. The promise and the pitfalls of AI will shape our future, and it is our responsibility to nurture its benefits and ensure that AI is always in service to humanity. We hope you will join us on this journey!