
DCVC DTOR 2024: Artificial general intelligence can be as distracting as it is alluring

A world with true AGI might be immeasurably better off — or worse. In any case, we think there are more immediate AI-related problems to be solved

The 2024 edition of the DCVC Deep Tech Opportunities Report, which we released in September, explains the guiding principles behind our investing and how our portfolio companies contribute to deep tech’s counteroffensive against climate change and the other threats to prosperity and abundance. The report also pauses occasionally to consider shiny objects—technology ideas that tempt innovators and entrepreneurs, but ultimately distract from more urgent and practical work. What follows is the first such entry in the report, edited to reflect what’s been happening in AI since publication.

“A system that is generally capable” is the definition of artificial general intelligence offered by Demis Hassabis, leader of Google’s DeepMind division. “Out of the box, it should be able to do pretty much any cognitive task that humans can do.” Hassabis says he would not be surprised if “we saw systems nearing that kind of capability within the next decade or sooner.”

We’re tracking this area closely, and we’re aware of the rapid recent progress that models like OpenAI’s ChatGPT o3 are showing against important benchmarks. But what kind of intelligence these problem-solving abilities represent is up for debate. If building an AGI system is merely a matter of gluing together enough different cognitive skills to compete with a well-rounded human — completing a math problem, interpreting a visual scene, composing a sonnet or a melody — then the goal may be within reach soon. However, such a system would not think the way a human does, if only because it would lack our sense organs, our emotion-racked nervous systems, and our networks of social relationships.

“The key to a scientific theory of our intelligence lies in acknowledging the fact that humans are embodied, which is to say that we are living, biological creatures who are in constant interaction with the material, social, cultural and technological environment,” writes Anthony Chemero, a professor of philosophy and psychology at the University of Cincinnati who’s been studying the idea of “embodied cognitive science” for more than a decade. Machine understanding, consciousness, sentience — all of these will likely require a fundamentally different approach to computing, if they can be achieved at all, Chemero argues.

Meanwhile, we face another, far more urgent task: making today’s AI systems more accountable. AI safety is a fast-growing field of inquiry and policymaking, and of course we agree with its basic goals: protecting personal privacy and data security, eliminating algorithmic bias, preventing AI-assisted fraud, and the like. What worries us right now is something subtler: the possibility that AI models will be given responsibility for myriad real-world decisions in the absence of robust methods and mechanisms for a) understanding and explaining those decisions, and b) allowing humans to challenge and reverse them. Moments of humanity and everyday mercy — the insurance claims adjuster who bends the rules, the traffic cop who waives a speeding ticket — are part of what makes our interactions with bureaucracies tolerable. We fear a world where small decisions about our lives are made by a web of hundreds of invisible AI systems built or hosted by giant technology companies, producing possibly unfair or even hateful effects (depending on the biases inherent in their training data), with no practical means of appeal.

To help avert such a future, we think it’s critical that every organization, from small startups to the largest corporations and government agencies, have the ability to build and run the machine-learning models and algorithms it needs for its operations, rather than ceding control to off-the-shelf models from the giant tech companies. Technology like that from DCVC portfolio company MosaicML, which was acquired in 2023 by another DCVC-backed company, Databricks, can help here. Databricks offers products that help developers deploy custom generative AI models quickly and easily, in their companies’ own secure environments, and at a fraction of the cost of other comparable services.
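To make the idea of running models in your own environment concrete, here is a minimal sketch using the open-source Hugging Face Transformers library. It is not Databricks’ or MosaicML’s tooling, and the model name is only an illustrative placeholder for whichever open-weights model an organization chooses to host on its own infrastructure.

```python
# Minimal sketch: load and query an open-weights generative model entirely inside
# your own environment, rather than sending data to a third-party API.
# The model name below is illustrative; substitute any open-weights model you control.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.2"  # hypothetical choice of open-weights model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

prompt = "Summarize the key risk factors in this internal report:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights, the prompts, and the outputs never leave the organization’s own systems, the same pattern extends naturally to fine-tuning on proprietary data.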

We’d also like to see companies building and using AI sign on to a set of guidelines such as the “Blueprint for an AI Bill of Rights” proposed by the White House Office of Science and Technology Policy under President Biden. Among the principles proposed in the OSTP document were “You should know how and why an outcome impacting you was determined by an automated system” and “You should have access to timely human consideration and remedy by a fallback and escalation process if an automated system fails, it produces an error, or you would like to appeal or contest its impact on you.” Such protections, and many others, could and should be hard-coded into any AI system that mediates access to opportunities, resources, or services.
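As a sketch of what hard-coding those two principles might look like, consider the following hypothetical Python example: a decision cannot be issued without a plain-language explanation, and every decision carries an appeal path that a named human reviewer can use to reverse it. The types and field names are illustrative, not an existing standard or library.

```python
# Hypothetical sketch of hard-coding two protections from the AI Bill of Rights blueprint:
# (1) no automated decision without an explanation, and
# (2) every decision is appealable to a human reviewer who can reverse it.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Decision:
    subject_id: str
    outcome: str                      # e.g. "claim_denied"
    explanation: str                  # plain-language reason shown to the affected person
    model_version: str
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    appealed: bool = False
    reversed_by: Optional[str] = None

def issue_decision(subject_id: str, outcome: str, explanation: str, model_version: str) -> Decision:
    """Refuse to emit any automated decision that lacks a human-readable explanation."""
    if not explanation.strip():
        raise ValueError("No decision may be issued without an explanation.")
    return Decision(subject_id, outcome, explanation, model_version)

def appeal(decision: Decision, reviewer: str, overturn: bool) -> Decision:
    """Escalate a decision to a named human reviewer, who may reverse the outcome."""
    decision.appealed = True
    if overturn:
        decision.reversed_by = reviewer
        decision.outcome = f"reversed:{decision.outcome}"
    return decision
```

The point is not the specific fields but the enforcement: the checks live in the decision path itself, so an unexplained or unappealable outcome becomes a bug rather than a policy choice.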

In the end, making AI systems more interpretable, explainable, and reversible isn’t just good social policy; it’s good engineering practice that will guide the development of more effective AI models in the future. “Forget AI doomerism; AGI is not the threat,” says DCVC managing partner Matt Ocko. “What is the threat is a vast assortment of black-box, unappealable little AI gods that codify the vindictive, opaque policies of the gas company, the cable company, the parking enforcement division, the other oligopolies that we all endure. The ability to cost-effectively validate those models and provide the tools to call them to account—that is essential for the survival of human civilization.”
