AWS UG Hong Kong x AgentCon Hong Kong
Details
We’re excited to partner with AgentCon Hong Kong, the global developer conference focused on AI Agents and autonomous AI! Our community will be hosting a full day of workshops and talks alongside the AgentCon Hong Kong program.
Below is the full agenda for our sessions.
Come learn, build, and explore the future of AI Agents with us!
Activities:
- AI IDE - Kiro Hands-on Workshop x AgentCon Hong Kong
- Building Multi-Agent Systems in Finance with Strands Agents and Bedrock AgentCore on AWS
- Governing AI Agents in Production: From Experiment to Engineering System
- Empower Team Wide Vibe Coding with LLM Gateway and Security-First MCPs
- What can go wrong when you ask your agent to go naughty?
AI IDE - Kiro Hands-on Workshop x AgentCon Hong Kong
Haowen Huang - Senior Developer Advocate, Amazon Web Services
Jacky Wu - Sr. Solutions Architect, Amazon Web Services
Time: 10:40 am
Generative AI tools have democratized software development, enabling both developers and non-technical users to build applications efficiently. However, these tools struggle with enterprise-level challenges: inadequate project planning, unstructured workflows, poor enterprise integration, and inconsistent code quality.
Kiro, AWS's AI IDE tool, addresses these gaps by integrating specification-driven development, structural design, test validation, and automated execution into a standardized, traceable workflow.
At re:Invent 2025, AWS launched “Kiro Powers”—extensible capability packages that inject domain-specific expertise into Kiro AI agents. Each Power provides:
- Domain guidance via steering files (POWER.md)
- Tool integration (MCP services for databases, payments, deployment)
- Automated Hooks for event-driven actions (API testing, infrastructure code generation)
Unlike traditional coding assistants, Kiro + Powers covers the entire development lifecycle—from requirements to deployment—while reducing token costs and ensuring consistent code quality.
We'll demonstrate Kiro's capabilities by building a game using Specs, Powers, and Hooks, showcasing its enterprise-grade advantages in planning, automation, and quality assurance.
Seats are first come, first served if attendance exceeds capacity!
To join this workshop, you need to register on the AWS AI User Group Hong Kong Meetup page:
https://www.meetup.com/aws-ai-user-group-hong-kong/events/313334952/
Building Multi-Agent Systems in Finance with Strands Agents and Bedrock AgentCore on AWS
Haowen Huang - Senior Developer Advocate, Amazon Web Services
Jacky Wu - Sr. Solutions Architect, Amazon Web Services
Time: 1:15 pm
This session explores practical implementations of AI agents for financial services on AWS. Discover how to design multi-agent systems with the Strands Agents framework and Bedrock AgentCore for complex financial workflows—from quantitative backtesting to an AI fund manager.
You'll learn multi-agent patterns and agent orchestration techniques, and walk away with concrete architectural patterns and implementation strategies for building production-ready AI agents in finance.
Governing AI Agents in Production: From Experiment to Engineering System
Mike Ng - ex-AWS Community Builder
Time: 2:30 pm
Most AI governance discussions assume AI will "do the right thing" with proper guidelines. This talk takes a different approach: assume drift, error, and misalignment by default, then design mechanical structures that make bad behavior expensive.

The core idea: systems are defined by what they refuse to do, not what they can do. For AI in production, governance means explicit rejection zones, constitutional constraints that cannot be bypassed, and separation between decision-making and execution.

The talk covers five governance patterns: constitutional constraints (forbidden states and behaviors), rejection zones (where AI cannot operate), separation of powers (decision ≠ execution ≠ validation), retrieval-led reasoning (grounding over pre-training), and mechanical enforcement (architecture over policy).

These patterns were developed while preparing experimental multi-agent systems for production deployment in FinTech, where mistakes are irreversible and governance is survival, not choice.
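As a rough sketch of the "decision ≠ execution ≠ validation" idea above (all names and the forbidden-action list here are illustrative assumptions, not material from the talk), the component that proposes an action never executes it; a validator with hard-coded constitutional constraints sits between them, so bad behavior is refused mechanically rather than by policy:

```python
# Minimal sketch of separation of powers with a constitutional constraint:
# forbidden actions are rejected before execution, regardless of what the
# decision-making agent proposes. Names are hypothetical.

FORBIDDEN_ACTIONS = {"transfer_funds", "delete_ledger"}  # constitutional constraints


def decide(task: str) -> dict:
    """Decision layer: proposes an action (stand-in for an LLM agent)."""
    if "pay" in task:
        return {"action": "transfer_funds", "amount": 1_000_000}
    return {"action": "read_balance"}


def validate(proposal: dict) -> bool:
    """Validation layer: enforces the rejection zone before execution."""
    return proposal["action"] not in FORBIDDEN_ACTIONS


def execute(proposal: dict) -> str:
    """Execution layer: only ever sees validated proposals."""
    return f"executed {proposal['action']}"


def run(task: str) -> str:
    proposal = decide(task)
    if not validate(proposal):
        return f"rejected {proposal['action']}"  # refusal is cheap and mechanical
    return execute(proposal)


print(run("pay the invoice"))    # rejected transfer_funds
print(run("check the account"))  # executed read_balance
```

The point of the pattern is that the validator is plain code with a fixed deny-list, so no prompt or model drift can route around it.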
Empower Team Wide Vibe Coding with LLM Gateway and Security-First MCPs
Gabriel Koo - AWS Community Builder & Senior Lead Engineer at Bowtie Life Insurance
Rakshit Jain - AI Engineer II, Bowtie Life Insurance
Time: 3:00 pm
Scaling AI-assisted development from a few enthusiasts to 50+ software engineers isn't just about API keys - it's about governance, trust, and standardized workflows. How do you prevent "shadow AI" and budget chaos while granting safe access to production context?
At Bowtie, we adopted a layered security-first approach: First, control the traffic; second, secure the tools; third, standardize the behavior.
This session covers our journey building a production-grade Vibe coding platform:
Layer 1: The AI Gateway (LiteLLM on AWS Fargate + Amazon RDS)
We established a centralized choke point for all AI traffic. This enables cost attribution, DLP detection, and usage visibility. Crucially, we enforce this at the network level - blocking direct access to non-official API providers to ensure all usage is visible and governed.
Layer 2: Security-First MCP servers
We treat AI agents as "CLI versions" of our internal web apps. By building custom Model Context Protocol (MCP) servers that reuse existing permissions and authentication - using Amazon Cognito for internal APIs and OAuth for official SaaS MCP servers - the AI acts as a delegate of the developer, with human approvals. No new service accounts, no "god-mode" bots - if you can't do it in the existing user interfaces, the Agent can't do it via MCPs.
Layer 3: Custom Skills for Standardization
Beyond just tools, we write custom Skills to guide the model's behavior, ensuring generated code aligns with our engineering standards and SOPs (e.g. grab a ticket, fetch knowledge base, apply a fix, close the ticket) before a PR is even opened.
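The "delegate of the developer" idea in Layer 2 can be sketched as follows (a simplified illustration with hypothetical names and a toy permission table; the real setup described above uses Amazon Cognito and OAuth): each tool call is made on behalf of a specific user and checked against that user's existing permissions, with no service-account fallback.

```python
# Sketch: an agent tool acting as a delegate of the developer.
# It reuses the user's existing permissions instead of a god-mode bot,
# so if the human can't do it, the agent can't either.
# The permission table and action names are illustrative assumptions.

USER_PERMISSIONS = {
    "alice": {"tickets:read", "tickets:close"},
    "bob": {"tickets:read"},
}


def agent_tool(user: str, action: str) -> str:
    """A tool call executed on behalf of `user`; denied by default."""
    if action not in USER_PERMISSIONS.get(user, set()):
        return f"denied: {user} lacks {action}"
    return f"ok: {action} performed as {user}"


print(agent_tool("bob", "tickets:close"))    # denied: bob lacks tickets:close
print(agent_tool("alice", "tickets:close"))  # ok: tickets:close performed as alice
```

In practice the permission lookup would be the same identity provider the web apps already use, which is what keeps the agent's reach identical to the user's.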
Walk away with an architectural blueprint for democratizing AI access that satisfies the strictest security requirements while giving developers the friction-free vibe coding experience they crave.
What can go wrong when you ask your agent to go naughty?
Richard Fan - AWS Hero | An independent security researcher
Time: 3:30 pm
We security folks all want to secure AI agents: restricting their capabilities, adding guardrails, crafting system prompts, all to make sure the agents don't go rogue.
But what if the AI agent is specifically designed to do bad things? Suddenly, the rules change: the instructions encourage the agent to cause harm, and the guardrails are left half-open so the agent can do its job. In this scenario, what can go wrong?
In this session, Richard will talk about AWS Security Agent, an AI agent designed to perform automated penetration tests. He will share the security issues he found in the agent and highlight the challenges of building agents that simulate harmful actions.
Important notes:
We’re partnering with AgentCon Hong Kong!
Our community will join AgentCon Hong Kong, a global developer conference focused on AI Agents and autonomous AI organized by the Global AI Community.
To attend this workshop, you must register on the official AgentCon page by the Global AI Hong Kong chapter as well!
Registration on this User Group page does not grant venue access.
Official Registration (Required):
https://globalai.community/chapters/hong-kong/events/agentcon-hong-kong/
Important Notes for All Attendees
1️⃣ Registration Required for Entry
All participants must register on the official AgentCon Hong Kong page.
Venue access will only be granted to people who appear on the official attendee list.
2️⃣ Lunch Ticket Eligibility
Attendees who:
- Check in during the morning session, and
- Stay for and join the keynote
will receive a lunch ticket.
3️⃣ First‑Come‑First‑Served (If Over Capacity)
In case of high demand, lunch tickets will be distributed on a first‑check‑in, first‑served basis.
Arrive early to secure your ticket.
Join AgentCon to learn about AI agents. We're excited to be hosting our workshops and talks alongside the conference that day!
