

What we’re about
This group is for anyone interested in application development, APIs, cloud, cognitive computing, and artificial intelligence. IT professionals, entrepreneurs, students, and the simply curious are all welcome to join us. Meetups may take the form of a round table, a talk, or hands-on sessions, and they are led by passionate IBM professionals, customer speakers, integrators, and more.
The Meetups share a common theme: the IBM Cloud platform, described below. You can try the platform for free by creating an IBM Cloud Lite account (https://www.ibm.com/cloud/lite-account) on the IBM Cloud website (https://www.ibm.com/cloud/).
What is IBM Cloud?
https://www.youtube.com/watch?v=oCIZybdR_Y8
IBM's innovative cloud computing platform combines platform as a service (PaaS) with infrastructure as a service (IaaS) and includes a rich catalog of cloud services that can be easily integrated with PaaS and IaaS to build business applications rapidly.
IBM Cloud (formerly Bluemix) has deployments that fit your needs whether you are a small business that plans to scale, or a large enterprise that requires additional isolation. You can develop in a cloud without borders, where you can connect your private services to the public IBM Cloud services available from IBM. You and your team can access the apps, services, and infrastructure in IBM Cloud and use existing data, systems, processes, PaaS tools, and IaaS tools. Developers can tap into the rapidly growing ecosystem of available services and runtime frameworks to build applications using polyglot programming approaches.
With IBM Cloud, you no longer have to make large investments in hardware to test out or run a new app. Instead, we manage it all for you and only charge for what you use. IBM Cloud provides public, dedicated (https://console.bluemix.net/docs/dedicated/index.html), and local (https://console.bluemix.net/docs/local/index.html) integrated deployment models.
You can take an idea from inception, to development sandbox, to a globally distributed production environment with compute and storage infrastructure, open source platform services and containers, and software services and tools from IBM, Watson, and more. Beyond the capabilities of the platform itself, IBM® Cloud also provides flexible deployment. Provision IBM® Cloud resources on-premises, in dedicated private cloud environments, or in the public cloud, and manage the resources from all three types of environments in a single dashboard.
All IBM cloud resources that are deployed in public and dedicated environments are hosted from your choice of IBM® Cloud Data Center locations around the world. IBM Cloud Data Centers provide regional redundancy, a global network backbone connecting all data centers and points of presence, and stringent security controls and reporting. Through IBM Cloud Data Centers, IBM can meet your most demanding expansion, security, compliance, and data residency needs.
IBM enables you to:
- Deploy high performance compute and storage infrastructure in secure IBM Cloud Data Centers around the world.
- Test and adopt a broad range of cloud services and capabilities from IBM, open source communities, and third-party developers.
- Connect to all of your legacy systems and apps from a single, scalable, cloud platform through private network and API capabilities.
- Spin up and turn down resources in real time as your business needs or workload demands change.
Upcoming events
Network event • Online • 372 attendees from 115 groups

LLM-based web agents have recently made significant progress, but much of it has occurred in closed-source systems, widening the gap with open-source alternatives. Progress has been held back by two key challenges: first, a narrow focus on single-step tasks that overlooks the complexity of multi-step web interactions; and second, the high compute costs required to post-train LLM-based web agents.
To address this, we present the first statistically grounded study on compute allocation for LLM web-agent post-training. Our approach uses a two-stage pipeline, training a Llama 3.1 8B or QWEN 2.5 7B student to imitate a Llama 3.3 70B teacher or QWEN 2.5 72B via supervised fine-tuning (SFT), followed by on-policy reinforcement learning (GRPO).
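To make the two-stage recipe concrete, here is a minimal illustrative sketch (not the authors' code) of how such a pipeline could look with recent versions of the Hugging Face TRL library: supervised fine-tuning of a student on teacher-generated trajectories, followed by on-policy reinforcement learning with GRPO. The dataset names, output paths, and reward function below are hypothetical placeholders.

```python
# Illustrative two-stage pipeline sketch with Hugging Face TRL.
# Dataset names, paths, and the reward function are hypothetical placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer, GRPOConfig, GRPOTrainer

# Stage 1: SFT -- the student imitates trajectories produced by a larger teacher.
# "my-org/teacher-web-trajectories" is a hypothetical dataset with a "text" column.
sft_data = load_dataset("my-org/teacher-web-trajectories", split="train")
sft_trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B",
    train_dataset=sft_data,
    args=SFTConfig(output_dir="student-sft"),
)
sft_trainer.train()
sft_trainer.save_model("student-sft")

# Stage 2: on-policy RL with GRPO, starting from the SFT checkpoint.
# The reward function is a toy placeholder; in the web-agent setting it would
# come from task success in environments such as MiniWob++ or WorkArena.
def task_reward(completions, **kwargs):
    return [1.0 if "CLICK" in c else 0.0 for c in completions]  # placeholder reward

rl_data = load_dataset("my-org/web-agent-prompts", split="train")  # needs a "prompt" column
grpo_trainer = GRPOTrainer(
    model="student-sft",  # resume from the SFT output directory
    reward_funcs=task_reward,
    args=GRPOConfig(output_dir="student-grpo"),
    train_dataset=rl_data,
)
grpo_trainer.train()
```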
We find this process highly sensitive to hyperparameter choices, making exhaustive sweeps impractical. To spare others from expensive trial-and-error, we sample 1,370 configurations and use bootstrapping to estimate effective hyperparameters. Our results show that combining SFT with on-policy RL consistently outperforms either approach alone on both WorkArena and MiniWob++. Further, this strategy requires only 55% of the compute to match the peak performance of pure SFT on MiniWob++, effectively pushing the compute-performance Pareto frontier, and is the only strategy that can close the gap with closed-source models.
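As a rough illustration of the bootstrapping idea (a sketch with made-up numbers, not the paper's analysis), one can resample the few evaluation runs available per configuration to get confidence intervals on mean performance and head-to-head win probabilities between training strategies:

```python
# Bootstrap comparison of hyperparameter/training configurations when each one
# is only evaluated a handful of times. All scores below are invented.
import numpy as np

rng = np.random.default_rng(0)

# scores[config] = observed task success rates for that configuration
scores = {
    "sft_only":      [0.41, 0.44, 0.39, 0.43],
    "grpo_only":     [0.38, 0.45, 0.40, 0.42],
    "sft_then_grpo": [0.47, 0.51, 0.46, 0.49],
}

def bootstrap_mean_ci(values, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean score."""
    values = np.asarray(values)
    means = [rng.choice(values, size=len(values), replace=True).mean()
             for _ in range(n_boot)]
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return values.mean(), lo, hi

for name, vals in scores.items():
    mean, lo, hi = bootstrap_mean_ci(vals)
    print(f"{name:15s} mean={mean:.3f}  95% CI=({lo:.3f}, {hi:.3f})")

# Probability that SFT+GRPO beats SFT alone under resampling of the observed runs
n_boot = 10_000
wins = sum(
    rng.choice(scores["sft_then_grpo"], 4, replace=True).mean()
    > rng.choice(scores["sft_only"], 4, replace=True).mean()
    for _ in range(n_boot)
)
print(f"P(SFT+GRPO > SFT alone) ~ {wins / n_boot:.2f}")
```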
Read the paper on ArXiv: How to Train Your LLM Web Agent: A Statistical Diagnosis (PDF)
About the speaker
I’m Massimo Caccia, Senior Research Scientist at ServiceNow Research, specializing in post-training methods for computer-use agents. I see computer use as the ultimate playground for testing agents, thanks to its ubiquity and diversity. My research involves conducting large-scale empirical studies to systematically evaluate trade-offs among different approaches and to develop practical know-how, with reinforcement learning being a particular focus.
As a core contributor to the web-agent research library ecosystem, I actively shape evaluation frameworks (BrowserGym, WorkArena) and development platforms (AgentLab). My goal is to bridge foundational research and scalable tools to advance the field.
Previously, I completed my Ph.D. at the Quebec Artificial Intelligence Institute (Mila) under Professor Laurent Charlin. During my doctoral studies, I collaborated with DeepMind’s Continual Learning team led by Marc’Aurelio Ranzato, Amazon’s team under Alex Smola, and ElementAI prior to its integration with ServiceNow.
My Ph.D. research focused on building agents capable of accumulating and transferring knowledge across tasks, drawing from continual learning, transfer learning, and meta-learning. My work explored applications in language, vision, and reinforcement learning, emphasizing improvements in data and compute efficiency.
About the AI Alliance
The AI Alliance is an international community of researchers, developers, and organizational leaders committed to supporting and enhancing open innovation across the AI technology landscape to accelerate progress, improve safety, security, and trust in AI, and maximize benefits to people and society everywhere. Members of the AI Alliance believe that open innovation is essential to develop and achieve safe and responsible AI that benefits society rather than a select few big players.

Join the community
Sign up for the AI Alliance newsletter (check the website footer) and join our new AI Alliance Discord.