I can’t stop thinking about a new concept that AI applications could benefit from. I’m calling it internal monologue capture. When Daniel Miessler and I were hanging out a few months ago, I told him that a huge level-up for AI applications would be capturing the internal monologue of experts. I’m pumped to finally write a blog about it.
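To make the idea a bit more concrete, here’s a rough sketch of what capturing an expert’s internal monologue alongside their actions could look like. The `ExpertSession` structure and its field names are purely hypothetical illustrations of the concept, not an existing format or library:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class MonologueEntry:
    """One captured moment: what the expert did and what they were thinking."""
    action: str      # the visible step the expert took
    monologue: str   # the internal reasoning they verbalized while doing it
    timestamp: datetime = field(default_factory=datetime.now)


@dataclass
class ExpertSession:
    """A task worked by an expert, with their running internal monologue."""
    expert: str
    task: str
    entries: List[MonologueEntry] = field(default_factory=list)

    def record(self, action: str, monologue: str) -> None:
        self.entries.append(MonologueEntry(action=action, monologue=monologue))

    def to_prompt_context(self) -> str:
        """Render the captured monologue as few-shot context for a model."""
        lines = [f"Task: {self.task} (expert: {self.expert})"]
        for e in self.entries:
            lines.append(f"- Action: {e.action}")
            lines.append(f"  Thinking: {e.monologue}")
        return "\n".join(lines)


# Hypothetical example: an analyst triaging an alert, narrating as they go.
session = ExpertSession(expert="analyst", task="triage a suspicious login alert")
session.record(
    action="checked the source IP against the VPN ranges",
    monologue="Off-hours logins from a new ASN are the first thing I rule out.",
)
print(session.to_prompt_context())
```

The point of the `to_prompt_context` helper is that the expert’s reasoning, not just their final actions, ends up in the model’s context.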
Claude 3.5 Sonnet was recently released, and it’s a clear step up from any other model currently available. Not only is it more capable, but it’s also incredibly fast and cost-effective. That combination makes it a great fit for a wide range of applications.
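If you want to try it, here’s a minimal sketch of calling it through the Anthropic Python SDK, assuming `ANTHROPIC_API_KEY` is set in your environment; the model string is the 3.5 Sonnet identifier from around release time, so check the docs for the current one:

```python
# pip install anthropic
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Model identifier assumed from around the 3.5 Sonnet release.
message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Summarize why fast, cheap models matter for AI apps."}
    ],
)

print(message.content[0].text)
```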
Yann LeCun is making the same mistake Marc Andreessen makes about AI risk. Neither of them is considering how powerful a system can be when it combines generative AI with other code, tools, and features. An LLM on its own can’t cause massively bad outcomes, but it’s not absurd to think human-directed LLM applications with powerful tools could cause large-scale harm.
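To show why that distinction matters, here’s a bare-bones sketch of the kind of system I mean: an LLM wrapped in a loop whose output gets routed into real tools. The `call_llm` stub and the tool set are hypothetical placeholders, not any particular agent framework:

```python
import subprocess
from typing import Callable, Dict


def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (Anthropic, OpenAI, a local model, ...)."""
    return "run_shell: echo hello from the tool layer"


# Once model output is routed into tools like these, the system's reach is no
# longer bounded by "it only generates text" -- that's the gap in the argument.
TOOLS: Dict[str, Callable[[str], str]] = {
    "run_shell": lambda cmd: subprocess.run(
        cmd, shell=True, capture_output=True, text=True
    ).stdout,
    "read_file": lambda path: open(path, encoding="utf-8").read(),
}


def agent_step(goal: str) -> str:
    """One iteration: ask the model what to do, then actually execute it."""
    reply = call_llm(f"Goal: {goal}\nRespond as 'tool_name: argument'.")
    tool_name, _, argument = reply.partition(":")
    tool = TOOLS.get(tool_name.strip())
    if tool is None:
        return f"(model answered directly) {reply}"
    return tool(argument.strip())


if __name__ == "__main__":
    print(agent_step("demonstrate that text output can trigger real actions"))
```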
There’s been a lot of discussion lately about how AI struggles with long-running tasks. And it makes sense when you think about it. These large language models can generate a ton of text in a few seconds. But then what? They’ve put out all that text or code and don’t really have a clear direction on what to do next.
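One common workaround is to give the model that direction explicitly with a plan-then-execute loop, sketched below; the `call_llm` stub and the prompts are hypothetical placeholders for a real model call:

```python
from typing import List


def call_llm(prompt: str) -> str:
    """Stand-in for a real model API call."""
    raise NotImplementedError("wire this up to a real model")


def plan(goal: str) -> List[str]:
    """Ask the model for an ordered list of small, concrete steps up front."""
    reply = call_llm(f"Break this goal into short numbered steps, one per line:\n{goal}")
    return [line.strip() for line in reply.splitlines() if line.strip()]


def run(goal: str) -> str:
    """Work through the plan one step at a time, feeding results back in,
    so each burst of generation has a clear next target instead of drifting."""
    results: List[str] = []
    for step in plan(goal):
        done_so_far = "\n".join(results)
        results.append(call_llm(
            f"Goal: {goal}\nCompleted so far:\n{done_so_far}\nNow do only this step: {step}"
        ))
    return results[-1] if results else ""
```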
OpenAI just made a big move in the AI space with the release of GPT-4o (“o” stands for “omni”). The crazy part is that it’s a single model that can process not just text, but also audio and images. And it’s going to be accessible to free users (or at least the text version).
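If you want to poke at it from code, here’s a minimal sketch using the OpenAI Python SDK, assuming `OPENAI_API_KEY` is set in your environment; the image URL is just a placeholder:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Text plus an image in a single request to the one model.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's going on in this picture?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```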