All things AI
- Introducing the Glazing Score 🍩 30 Apr 2025
- Simplify Interviews with guided-capture 26 Mar 2025
- Shift: AI-Powered Hacking 04 Jan 2025
- AI-Toolbox: Who's building it? 03 Oct 2024
- Truth, Progress, and AI 22 Aug 2024
- LLMs: Beyond Truth Telling 22 Aug 2024
- LLMs: Beyond Truth Plateaus 22 Aug 2024
- LLMs: Beyond Factual Accuracy 22 Aug 2024
- The Data Wall, Agents, and Planning-Based Evals 22 Aug 2024
- Beyond the Data Wall 22 Aug 2024
- Internal Monologue Capture 01 Aug 2024
- Meta Unveils Llama3.1: Revolutionizing Large Language Models 24 Jul 2024
- Introducing Llama3.1: Meta's New 405B Model 24 Jul 2024
- Exploring Prompt Injection: A Double-Edged Sword for AI Agents 24 Jul 2024
- Exploring JSON Schema 24 Jul 2024
- AI's New Hypothesis Engine: Crowdsource Science 24 Jul 2024
- AI's Crowd-Powered Hypothesis Engine: The Next Game-Changer? 24 Jul 2024
- OpenAI's Groundbreaking GPT-4o Release 12 Jul 2024
- AI-Powered Style Writing 12 Jul 2024
- Unleashing Claude 3.5 Sonnet As A Hacker 29 Jun 2024
- Digital Assistants: Balancing Knowledge 28 Jun 2024
- AI's Humorous Leap Forward 28 Jun 2024
- Defining Real AI Risks 19 May 2024
- Leveraging Narrow AI Focus 16 May 2024
- Empowering Long-Running AI Agents with Timers 16 May 2024
- GPT-4o: Actually Good Multimodal AI 14 May 2024
- The Three Categories of AI Agent Auth 08 May 2024
- The Meta AI Ray-Bans Are Awesome 08 May 2024
- Unlocking LLM Potential with Expert Monologues 05 May 2024
- assumptions_made 04 May 2024
- Unveiling AI's Hidden Assumptions 03 May 2024
- Improving AI with 'assumptions_made' Standard 03 May 2024
- Rabbit r1: Innovative Device, Security Concerns 26 Apr 2024
- Incremental Learning LLM Pattern 24 Apr 2024
- All About Hackbots: AI Agents That Hack 21 Feb 2024
- From Concept to Capability: Required Security Changes for Secure AI Agents 05 Feb 2024
- AI Hacking Agents Will Outperform Humans 05 Feb 2024
- Adapting to Advancements 29 Nov 2023
- AI Hacking Agents Will Outperform Humans 08 Nov 2023
- Beyond the Blog: More AI and Hacking Content 04 Nov 2023
- AI Security Has Serious Terminology Issues 16 Oct 2023
- Jailbreaking Humans vs Jailbreaking LLMs 11 Oct 2023
- vim + llm = 🔥 18 Sep 2023
- Yes. LLMs can create convincingly human output. 30 Aug 2023
- Announcing PIPE: The Prompt Injection Primer 25 Aug 2023
- AI Creativity: Can LLMs Create New Things? 31 Jul 2023
- From Theory to Reality: Explaining the Best Prompt Injection Proof of Concept 19 May 2023
- Prompt Injection Attacks and Mitigations 19 Apr 2023
- Turbocharge ChatGPT With A Metaprompter 17 Apr 2023
- Hacking with ChatGPT: Ideal Tasks and Use-Cases 21 Feb 2023
- How to get setup to create awesome AI art 29 Sep 2022