Claude Now Lets Users Build AI Apps Directly in Chat Without
June 28, 2025 | by Olivia Sharp

Claude’s No-Code App Builder: Why This Shift Matters
Every few months a generative-AI announcement lands that feels less like an incremental update and more like a tectonic shift. Anthropic’s latest release falls squarely in the latter camp. Claude—already known for its context length and guarded approach to safety—now lets anyone build shareable AI-powered applications directly inside the chat, no code or API gymnastics required. The feature, an evolution of Claude’s Artifacts pane, is currently in open beta across Free, Pro, and Max plans. Users simply describe what they want, watch Claude assemble it, and publish with a single click.
From Static Artifacts to Living Apps
I’ve been following Artifacts since its quiet rollout last year. Originally, it was a side window where Claude tucked away generated documents, diagrams, or snippets of code so you could keep chatting while glancing at results. Handy, but largely passive.
The new workflow flips that model on its head. Now, when you ask Claude to “build me a flash-card tutor” or “spin up a mini chat bot that summarises PDFs,” the assistant not only writes the code—it executes it in an interactive container that lives alongside your conversation. You can poke at the interface, refine the prompt, or dive into the source. Hit Share and you get a link that anyone with a Claude account can open and use. Their usage bills against their quota, not yours, which elegantly sidesteps the typical “viral success equals unexpected cloud bill” nightmare.
Why This Matters Beyond the Hype
1. Democratising agent orchestration. Over the past year we’ve seen a proliferation of “agents” and workflow tools that connect one LLM call to the next. They’re powerful but still favour those comfortable with YAML, callbacks, or at least a low-code builder. Claude’s update removes that last mile of friction—if you can explain a task in plain language, you can deploy it.
2. A new distribution channel. By embedding publishing directly into the chat UI, Anthropic is edging toward an AI app store model without actually saying “app store.” Discoverability, remixing, and usage-based billing are baked into the platform. Expect to see niche micro-tools spread the way Notion templates or Figma kits currently do.
3. Lower total cost of experimentation. In corporate settings, I routinely see promising prototypes stall because teams hit procurement or DevOps bottlenecks. Here, the legal/security review still matters (more on that below), but infrastructure overhead effectively drops to zero. That accelerates iteration cycles and broadens who can participate in solution design.
Real-World Scenarios Already Emerging
“We built a self-grading language-learning game in 40 minutes, shipped it to students the same afternoon, and watched them compete on leaderboards by evening.” — Early-access teacher in São Paulo
Early testers are churning out:
- Personalised tutoring bots that adapt difficulty based on student performance.
- Interactive data rooms where business users upload spreadsheets and interrogate trends in natural language.
- Lightweight games whose NPCs remember player choices across sessions.
- One-page product brief generators for UX teams, retained as living docs.
The through-line is context retention. Because each app can call back to Claude’s API, it preserves state, learns from user input, and feels less like a single-shot prompt hack and more like a product.
Strategic Considerations for Teams
1. Governance first, excitement second. Yes, the friction is gone—but your information-security obligations remain. Establish a lightweight review process so employees know when customer data is allowed inside Claude apps and what final outputs must be scrubbed.
2. Think in portfolios, not one-offs. Treat these micro-apps the way you treat Slack workflows or Excel macros. Catalogue them, measure adoption, deprecate what stalls, and double-down on what sticks. The build cost is low, but cognitive load on colleagues is real.
3. Leverage the remix culture. Because published apps are forkable, you can seed a template—say, a competitive-analysis assistant—and invite sales reps to tailor tone and data sources without touching code. The version tree is managed automatically.
4. Budget for usage, not development. Finance teams still need forecasts. Shift modelling from engineering hours to API-call volume. Anthropic’s usage-based billing simplifies chargebacks, but you’ll want dashboards that surface “heavy hitters” early.
Ethical & Safety Implications
Anthropic is known for its Constitutional AI approach, and each generated app inherits those guardrails. Still, the ability to chain prompts raises questions:
- Prompt injection cascades. If a downstream user modifies the UI to slip malicious instructions, does it bypass your original safeguards? Build layered validation into the workflow.
- Attribution haze. When an app outputs code or content, ensure authorship and licensing are explicit. Claude may draw from training data you didn’t intend to redistribute.
- Long-term autonomy. The temptation to hand off entire decision loops is strong. Keep humans in the review path for consequential actions—loans, diagnoses, HR decisions—until you’re confident in monitoring and audit trails.
Practical Getting-Started Path
Step 1: Identify a repetitive internal task that currently lives in a spreadsheet or email thread.
Step 2: Draft a prompt describing the desired workflow, results format, and any UI elements (tables, buttons, file uploads).
Step 3: Ask Claude to “turn this into an interactive artifact.” Iterate until usability feels right.
Step 4: Share with a small pilot group via link. Capture usage metrics and qualitative feedback.
Step 5: Formalise governance/publishing guidelines based on lessons learned.
The Road Ahead
This release nudges the industry one step closer to an era where interfaces are as fluid as ideas. We’ve spent decades translating intent into code through middleware, frameworks, and painstaking deployments. Claude’s no-code builder collapses that distance—not by dumbing down development, but by conversationally compiling intent into runnable software.
The question is no longer, “Can we build it?” but rather, “How thoughtfully can we wield it?” For teams willing to experiment responsibly, the payoff is immense: faster innovation loops, lower technical debt, and a workforce that feels empowered rather than sidelined by AI.
I’ll be tracking novel use cases—especially where the line blurs between artifact and autonomous agent—and will share field notes in future posts. For now, open Claude, articulate a problem, and watch a pocket-sized solution materialise. Just don’t forget to put guardrails around the thrill.

RELATED POSTS
View all