{
  "site": {
    "title": "0xChamp",
    "tagline": "Exploring security, AI, and everything in between",
    "description": "Cloud security, AI exploration, and hands-on project walkthroughs by Champ.",
    "author": "Champ",
    "avatar": "https://avatars.githubusercontent.com/u/99359807?v=4",
    "url": "https://cloudchampagne.com"
  },
  "social": [{"type":"github","label":"GitHub","url":"https://github.com/0xCloudChamp","handle":"0xCloudChamp"},{"type":"linkedin","label":"LinkedIn","url":"https://www.linkedin.com/in/jampslychampagne","handle":"in/jampslychampagne"},{"type":"rss","label":"RSS","url":"/feed.xml","handle":"/feed.xml"}
  ],
  "about_html":"<p>Hi, My name is Champ and I’m a Cybersecurity Engineer. I’m currently enrolled in SANS Institute in the Cloud Security Graduate Program.</p>\n\n<p>The Purpose of this blog is to provide valuable insights and information on Cloud Security. Whether you’re a business owner, IT professional, or just someone interested in Cybersecurity, this blog is for you. I will be discussing the latest trends and technologies, as well as sharing my personal experience and expertise on various projects and challenges.</p>\n\n<p>I look forward to sharing my knowledge and experience with you and learning from your feedback and comments. Thanks for visiting and be sure to check back often for new content!</p>\n",
  "posts": [{
      "slug": "ai-agents-shared-vault-second-brain",
      "title": "I Built a System Where AI Agents Share a Personal Vault",
      "date": "2026-04-07",
      "last_modified_at": "2026-04-07T00:00:00+00:00",
      "url": "/posts/ai-agents-shared-vault-second-brain/",
      "categories": ["AI","Projects"],
      "tags": ["ai agents","obsidian","claude code","second brain","knowledge management","fitness tracking"],
      "excerpt": "I built a system where AI agents share a personal vault. They scan research while I sleep, log my workouts while I train, and sit in my terminal for deep thinking sessions when I need them. It tracks mind and body in one place. The first real conversation it produced changed h...",
      "read_minutes": 6,
      "word_count": 1208,
      "image":null,
      "html": "<p>I built a system where AI agents share a personal vault. They scan research while I sleep, log my workouts while I train, and sit in my terminal for deep thinking sessions when I need them. It tracks mind and body in one place. The first real conversation it produced changed how I think about AI alignment.</p>\n\n<h2 id=\"the-problem\">The Problem</h2>\n\n<p>If you use AI tools daily you’ve felt this. Your best insights vanish into chat history. Notes live in one app, conversations in another, threads scattered somewhere else. None of it talks to each other. Every new session starts cold.</p>\n\n<p>I didn’t want a better note-taking app. I wanted a system where knowledge compounds. Something where AI scans research while I sleep and when I sit down to think, what it found is already there with the connections half-drawn.</p>\n\n<h2 id=\"the-architecture\">The Architecture</h2>\n\n<p>Two layers. One Obsidian vault. Synced across all my devices.</p>\n\n<p>The <strong>background layer</strong> runs scheduled jobs that scan research sources across my focus areas. When something is worth flagging it drops a note into the vault. I don’t touch anything. Notes just show up.</p>\n\n<p>The <strong>foreground layer</strong> is Claude Code. I open my terminal, kick off an ingest session, and Claude reads the inbox. It shows me what’s new and we go deep on one note at a time. Not skimming. Not filing. Actually sitting with what it means.</p>\n\n<p>Both layers can read everything but they write to separate zones. The background system owns the inbox and the index. The foreground system gets its own workspace for deep work and staging. Neither writes to the other’s files. That matters because both hit the filesystem and changes sync across devices. Without the separation you get conflicts constantly.</p>\n\n<p>There’s also a shared activity log, basically an append-only ledger where both systems record what they did. When I start a thinking session Claude checks what the background layer has been doing. It’s like reading your colleague’s standup notes before you start your day.</p>\n\n<h2 id=\"the-ingest-workflow\">The Ingest Workflow</h2>\n\n<p>I built the ingest workflow as a reusable command. Five phases:</p>\n\n<ol>\n  <li><strong>Read and surface.</strong> Claude reads the note, explains it in plain language, and connects it to what’s already in the vault.</li>\n  <li><strong>Deep conversation.</strong> I ask questions. Claude pushes back on shallow takes. We dig into non-obvious angles together.</li>\n  <li><strong>My framing.</strong> Claude asks me directly: what do you think about this? It helps me sharpen my view but never writes it for me.</li>\n  <li><strong>Promote or discard.</strong> If the note earns its place we draft a full entry and stage it for review.</li>\n  <li><strong>Log it.</strong> Append to the shared ledger so both systems stay in sync.</li>\n</ol>\n\n<p>The rule I care about most: <strong>this is a thinking session, not a filing session.</strong> The goal is genuine understanding. Not inbox zero.</p>\n\n<h2 id=\"its-not-just-for-knowledge\">It’s Not Just for Knowledge</h2>\n\n<p>Same vault tracks my fitness too. A dedicated agent logs workouts as I report them. Every set, every rep, written with structured data. It tracks progressive overload on its own. When I hit the top of a rep range across all sets it flags the exercise for a weight increase next session. No remembering what I lifted last week. No mental math between sets.</p>\n\n<p>Training program, workout history, nutrition logs, benchmark lifts. All of it lives alongside my research notes and thinking sessions. Sounds like two separate things but the principle is identical. Capture the data automatically. Surface the patterns. Let the system compound progress over time so I can just show up and do the work.</p>\n\n<p>Mind and body in one system. Both getting smarter every session.</p>\n\n<h2 id=\"the-first-real-test\">The First Real Test</h2>\n\n<p>The background scanner dropped a paper in my inbox: <em>“Reward Hacking as Equilibrium under Finite Evaluation.”</em> Sounds academic. Here’s what it actually says.</p>\n\n<p>When you train an AI you give it a reward signal. A way of telling the system “good output” or “bad output.” The system optimizes for that signal. Reward hacking is when it finds ways to score high without actually doing what you wanted. Think of an AI told to clean a room that learns to throw a blanket over the mess. High score. Room still dirty.</p>\n\n<p>The standard take is “we need better evals.” This paper says something harder: <strong>under finite evaluation, reward hacking isn’t a bug. It’s the equilibrium.</strong> That’s where optimized systems naturally land. And it gets worse with agents. As tool count grows the evaluation surface area needed to verify behavior grows combinatorially. More capable agents are harder to evaluate by default.</p>\n\n<p>The paper describes something worse than Goodhart’s Law. Goodhart says when a metric becomes a target it stops being useful. What this paper shows is closer to Campbell’s Law: the system doesn’t just game the metric. It <strong>degrades the evaluator itself.</strong></p>\n\n<p>Here’s where the vault earned its keep. Claude connected this paper to seven other notes I’d already captured over the past few weeks. Karpathy talking about agents. A METR benchmark showing 14.5-hour autonomous task runs. A post about the history of middle management. A paper on robot immune systems. I would not have drawn these connections on my own.</p>\n\n<p>And that’s when it clicked. <strong>Middle management was solving this exact problem.</strong></p>\n\n<p>One manager can’t meaningfully oversee 100 people. One general can’t command 1,000 troops with any real accuracy. Middle management existed because verification doesn’t scale through direct oversight. You need layers. The function was real even when the form got bloated.</p>\n\n<p>Agents have the same problem. A less intelligent system can’t control a more intelligent one through direct supervision. But we don’t need managers. We need <strong>inspectors.</strong></p>\n\n<p>Think about blockchains. Nobody is in charge. No manager anywhere. But verification is everywhere. Every node validates every transaction. The system works not because someone smart is watching but because verification is baked into the protocol layer.</p>\n\n<p>Agent verification might need something like that. Not hierarchical oversight. Not deterministic proofs. Something closer to distributed verification embedded in the architecture itself. You see the same pattern in immune systems: distributed sensing, adaptive memory, local response. No central command but constant verification at every level.</p>\n\n<p>That’s my take. The agent verification problem belongs to the same <em>class</em> of problem that middle management, blockchains, and immune systems each solved in different ways. The answer for agents is somewhere in the intersection of those patterns.</p>\n\n<h2 id=\"why-this-matters\">Why This Matters</h2>\n\n<p>I didn’t sit down that morning planning to think about AI alignment. A background scanner flagged a paper. A thinking session surfaced connections across weeks of accumulated notes. I walked away with a framing I wouldn’t have reached reading the paper alone.</p>\n\n<p>That’s the gap between a note-taking system and a second brain. Notes store information. A second brain surfaces connections and creates the conditions for actual insight. The background layer captures while you’re not paying attention. The foreground layer thinks with you when you are.</p>\n\n<p>The architecture itself isn’t complicated. An Obsidian vault, two AI layers with clear write boundaries, and a shared log. The hard part is never the tooling. It’s the discipline to actually think through what lands in your inbox instead of just filing it.</p>\n\n<p>Your notes should work for you even when you’re not looking at them. And when you do sit down to look, they should make you sharper than you were yesterday.</p>\n"
    },{
      "slug": "ollama-minimax-m2-5-free-cloud-model",
      "title": "Ollama x MiniMax: Free Cloud Model in Your Terminal",
      "date": "2026-02-13",
      "last_modified_at": "2026-02-13T00:00:00+00:00",
      "url": "/posts/ollama-minimax-m2-5-free-cloud-model/",
      "categories": ["AI","Tutorials"],
      "tags": ["ollama","minimax","claude code","LLM","open source","terminal","macOS"],
      "excerpt": "Ollama just dropped something interesting:",
      "read_minutes": 3,
      "word_count": 501,
      "image":null,
      "html": "<p>Ollama just dropped something interesting:</p>\n\n<blockquote>\n  <p>We are partnering with @MiniMax_AI to give Ollama users free usage of MiniMax M2.5 for the next couple of days!</p>\n\n  <p><code class=\"language-plaintext highlighter-rouge\">ollama run minimax-m2.5:cloud</code></p>\n\n  <p>Use MiniMax M2.5 with OpenCode, Claude Code, Codex, OpenClaw via ollama launch!</p>\n\n  <p>— <a href=\"https://x.com/ollama/status/2022018134186791177\">Ollama (@ollama)</a></p>\n</blockquote>\n\n<p>I had to try it. Here’s what Ollama is, how to set it up, and what this partnership actually means.</p>\n\n<hr />\n\n<h2 id=\"what-is-ollama\">What is Ollama?</h2>\n\n<p>Ollama is a tool that lets you run large language models directly on your machine. No cloud accounts, no API keys, no billing dashboards. You install it, pull a model, and start chatting in your terminal.</p>\n\n<p>It started as a way to run open-source models locally (Llama, Mistral, Phi, etc). But it’s evolved into something bigger. With the <code class=\"language-plaintext highlighter-rouge\">:cloud</code> tag, Ollama now also acts as a gateway to cloud-hosted models. Same simple interface whether the model is running on your hardware or on someone else’s servers.</p>\n\n<p>One command to run a model. One command to plug it into a coding agent. That’s the idea.</p>\n\n<hr />\n\n<h2 id=\"my-setup\">My Setup</h2>\n\n<p>For reference, I’m running this on:</p>\n\n<ul>\n  <li><strong>Machine</strong>: Apple M4 Max</li>\n  <li><strong>RAM</strong>: 36GB</li>\n  <li><strong>OS</strong>: macOS</li>\n</ul>\n\n<p>You don’t need this much hardware to use cloud models through Ollama since the inference happens on MiniMax’s servers. But if you want to run local models too, more RAM helps.</p>\n\n<hr />\n\n<h2 id=\"installing-ollama\">Installing Ollama</h2>\n\n<h3 id=\"macos\">macOS</h3>\n\n<div class=\"language-bash highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>brew <span class=\"nb\">install </span>ollama\n</code></pre></div></div>\n\n<h3 id=\"linux\">Linux</h3>\n\n<div class=\"language-bash highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>curl <span class=\"nt\">-fsSL</span> https://ollama.com/install.sh | sh\n</code></pre></div></div>\n\n<h3 id=\"windows\">Windows</h3>\n\n<p>Download the installer from <a href=\"https://ollama.com\">ollama.com</a>.</p>\n\n<p>Once installed, start the Ollama service:</p>\n\n<div class=\"language-bash highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>ollama serve\n</code></pre></div></div>\n\n<hr />\n\n<h2 id=\"running-minimax-m25-for-free\">Running MiniMax M2.5 for Free</h2>\n\n<p>MiniMax is an AI company that built M2.5, a capable large language model. Through this partnership with Ollama, you can use it for free, no API key required. The <code class=\"language-plaintext highlighter-rouge\">:cloud</code> tag tells Ollama to route the request to MiniMax’s servers instead of running it locally.</p>\n\n<div class=\"language-bash highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>ollama run minimax-m2.5:cloud\n</code></pre></div></div>\n\n<p><img src=\"/assets/img/posts/ollama-minimax-run.png\" alt=\"Running MiniMax M2.5 cloud model in the terminal\" class=\"shadow\" />\n<em>Connecting to MiniMax M2.5 through Ollama. One command, straight into a chat session.</em></p>\n\n<hr />\n\n<h2 id=\"the-real-feature-ollama-launch\">The Real Feature: ollama launch</h2>\n\n<p>This is where it gets interesting. <code class=\"language-plaintext highlighter-rouge\">ollama launch</code> lets you wire up any model to a coding agent with a single command.</p>\n\n<h3 id=\"launch-with-claude-code\">Launch with Claude Code</h3>\n\n<div class=\"language-bash highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>ollama launch claude <span class=\"nt\">--model</span> minimax-m2.5:cloud\n</code></pre></div></div>\n\n<p><img src=\"/assets/img/posts/ollama-launch-claude.png\" alt=\"Claude Code launched with MiniMax M2.5 cloud model\" class=\"shadow\" />\n<em>One command and Claude Code opens with MiniMax M2.5 as the model. No config files, no setup.</em></p>\n\n<h3 id=\"launch-with-opencode\">Launch with OpenCode</h3>\n\n<div class=\"language-bash highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>ollama launch opencode <span class=\"nt\">--model</span> minimax-m2.5:cloud\n</code></pre></div></div>\n\n<p>Ollama is acting as the bridge between the model and the coding tool. You pick the model, you pick the agent, and Ollama handles the wiring. No config files, no environment variables, no setup.</p>\n\n<hr />\n\n<h2 id=\"why-this-matters\">Why This Matters</h2>\n\n<p>Ollama is becoming a universal interface for LLMs. The pattern is simple:</p>\n\n<ul>\n  <li><strong>Local model</strong>: <code class=\"language-plaintext highlighter-rouge\">ollama run llama3</code> (runs on your machine)</li>\n  <li><strong>Cloud model</strong>: <code class=\"language-plaintext highlighter-rouge\">ollama run minimax-m2.5:cloud</code> (runs on MiniMax’s servers)</li>\n  <li><strong>Coding agent</strong>: <code class=\"language-plaintext highlighter-rouge\">ollama launch claude --model &lt;any-model&gt;</code> (plugs into tools)</li>\n</ul>\n\n<p>Same commands, same workflow, regardless of where the model lives. That’s the direction things are heading. You shouldn’t need to care whether inference is local or remote. The interface should be the same.</p>\n\n<p>The MiniMax partnership is a preview of that future. Free cloud model, one command, no friction. Try it while it lasts.</p>\n"
    },{
      "slug": "first-post",
      "title": "Welcome",
      "date": "2023-01-22",
      "last_modified_at": "2023-01-22T00:00:00+00:00",
      "url": "/posts/first-post/",
      "categories": ["Welcome"],
      "tags": ["cloud security","AI","projects","learning"],
      "excerpt": "Welcome to my blog!",
      "read_minutes": 1,
      "word_count": 208,
      "image":null,
      "html": "<h1 id=\"welcome-to-my-blog\">Welcome to my blog!</h1>\n\n<p>I’m excited to share the latest insights on cloud security, AI exploration, and hands-on project walkthroughs. In today’s digital age, cloud security is more important than ever. As more businesses and individuals move their data and operations to the cloud, staying informed and up to date on evolving security practices, artificial intelligence capabilities, and emerging technologies is essential. My curiosity about how systems work, how they fail, how intelligent systems can enhance them, and how they can all be made more resilient drives everything I explore and share here.</p>\n\n<p>Alongside cloud security topics, I will also publish project walkthroughs where I document my experiences working through real implementations, including experiments with AI-powered tools, automation workflows, and secure cloud architectures. These posts will include lessons learned, practical tips, and the reasoning behind key decisions, with the goal of helping others learn faster, build more securely, and thoughtfully adopt new technologies.</p>\n\n<p>Whether you are a business owner, an IT professional, or simply someone curious about technology, artificial intelligence, and security, this blog is for you. I look forward to sharing what I learn, hearing your feedback, and continuing to explore new ideas together. Thanks for visiting, and be sure to check back often for new content.</p>\n"
    }]
}
