MEMSEC - DEFCON901 - Memphis InfoSec social #29
Details
Come join us in room 225 at the FedEx Institute of Technology (FIT) on the UofM campus!
Large Language Models don’t have to live in someone else’s cloud. Current hardware makes it possible to run powerful AI models entirely on your own systems — giving you full control over capability, privacy, and experimentation.
This month, null0perat0r gives us a practical introduction to running local LLMs. We’ll cover the hardware needed to run models at home or in the lab, the tooling ecosystem (LM Studio, Ollama, vLLM, llama.cpp, Open WebUI, and more), and what terms like quantization, GGUF, and MLX actually mean in practice.
Beyond setup, we’ll explore why you should care:
- Keeping sensitive research and targets off third-party APIs
- Automating recon, analysis, and workflow augmentation
- Building AI assistants that can interact with real tools and local environments
- Running models offline, uncensored, and fully customizable
This meetup is FREE and open to the public, with Garibaldi's pizza provided!
