Defining Real AI Risks

Yann LeCun is making the same mistake about AI risk that Marc Andreessen makes. Neither is considering how powerful a system can be when it combines generative AI with other code, tools, and features. LLMs on their own can’t cause massively bad outcomes, but it’s not absurd to think human-directed LLM applications with powerful tools could cause large-scale harm.

More …

Empowering Long-Running AI Agents with Timers

There’s been a lot of discussion lately about how AI struggles with long-running tasks. It makes sense when you think about it: these large language models can generate a ton of text in a few seconds, but then what? They’ve produced all those words or code and don’t have a clear direction on what to do next.
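One way to read the "timers" idea in the title is as a scheduler that periodically re-invokes the agent so it always gets another turn instead of stalling after its first burst of output. A minimal sketch, where `agent_step` is a hypothetical stand-in for an actual LLM call:

```python
import time

def agent_step(state):
    # Hypothetical agent invocation: a real system would send the current
    # state to an LLM and get back updated state plus a "done" signal.
    state["iterations"] += 1
    done = state["iterations"] >= 3
    return state, done

def run_with_timer(state, interval_seconds=0.01, max_iterations=10):
    """Re-invoke the agent on a fixed interval until it reports completion."""
    for _ in range(max_iterations):
        state, done = agent_step(state)
        if done:
            break
        time.sleep(interval_seconds)  # wait for the next timer tick
    return state

result = run_with_timer({"iterations": 0})
```

The `max_iterations` cap is just a safety valve for the sketch; a production scheduler would likely use a real timer or job queue rather than `time.sleep`.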

More …

GPT-4o: Actually Good Multimodal AI

OpenAI just made a big move in the AI space with the release of GPT-4o (“o” stands for “omni”). What makes this model remarkable is that a single model can process not just text, but also audio and images. And it’s going to be accessible to free users (or at least the text version).

More …

The Three Categories of AI Agent Auth

As I’ve been discussing AI agent authentication with some brilliant people in San Francisco this week, it’s become clear to me that solutions will vary based on use case and risk tolerance. By imagining what a mature AI agent ecosystem might look like, I’ve concluded that the industry will essentially divide into three main categories.

More …

The Meta AI Ray-Bans Are Awesome

My wife got me the new Meta AI Ray-Bans as an early birthday present, and I have to say, they are seriously impressive. I’ve messed with some of the other AI devices out there, but these are on a whole different level. I’ll explain why by elaborating on each use case:

More …