AgentVerse - a Dockerized AI Agent Application
Building and Containerizing a Multi-Agent AI Web Application using Docker and Ollama
Introduction
The AI Agent System is a containerized web application
designed to perform multiple AI-driven tasks such as text summarization, resume
tailoring, code explanation, email generation, and meeting note preparation.
The system integrates Next.js as the frontend framework and Ollama
as the backend AI model service, enabling users to interact with several
specialized agents seamlessly.
The project was developed progressively across three parts. In the initial stages, the focus was on application logic
and frontend-backend integration. Later, the architecture was improved through
containerization using Docker and Docker Compose, enabling smooth
deployment and environment consistency. Finally, in the last part, the system was
finalized and the container images were published to Docker Hub.
This project demonstrates a modern approach to building
AI-assisted applications - combining frontend web technologies, large language
model APIs, and containerized microservices - all orchestrated through Docker
for reliability and scalability.
Objectives
Part 1: Initial Application and Single Container Setup
The main goal of Part 1 was to build the functional base of
the system and run it within a single containerized environment. The Next.js
application served as the main interface through which users could interact
with the AI agents. In this stage, the Ollama backend was not containerized;
instead, it ran directly on the host system.
The two AI agents implemented in this stage were:
- Summarizer – Takes long input text or documents and generates a concise, readable summary.
- Email Writer – Converts user-provided notes or bullet points into professionally structured emails.
The key objective of this part was to ensure that the frontend
could successfully communicate with the local Ollama backend through REST API
calls, and that the application’s agent logic worked correctly.
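The agent code itself is not reproduced in this report. As a rough sketch of what such a Part 1 call could look like (the function name, prompt wording, and error handling are illustrative assumptions; the /api/generate endpoint is Ollama's standard API, and gemma3:1b is the model named elsewhere in this report):

```js
// Illustrative sketch of a Part 1-style call from the Next.js backend
// to Ollama running directly on the host (not yet containerized).
// Function name and prompt wording are assumptions, not project code.
export async function summarize(text) {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gemma3:1b", // model used elsewhere in this report
      prompt: `Summarize the following text concisely:\n\n${text}`,
      stream: false, // ask Ollama for a single JSON reply instead of a stream
    }),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const data = await res.json();
  return data.response; // /api/generate returns the text in "response"
}
```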
Part 2: Dual Container Setup with Docker Compose
In this part of the project, the architecture was significantly improved by
containerizing both components - the Next.js frontend and the Ollama
backend. This marked the shift from a partially containerized setup to a
fully containerized microservice design.
Communication between the containers was handled through Docker's
internal networking, managed automatically by Docker Compose. Unlike the first
phase, where API calls targeted the host system, all backend requests in this
phase used the internal service name (http://ollama:11434), allowing
seamless inter-container communication.
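In practice, this switch from localhost to the service name is easiest to manage through an environment variable rather than a hard-coded hostname. A minimal sketch (the OLLAMA_URL variable name is an assumption for illustration):

```js
// Sketch: resolve the Ollama base URL from the environment so the same
// code works against the host in Part 1 and the "ollama" service in
// Part 2. The OLLAMA_URL variable name is an assumption.
const OLLAMA_URL = process.env.OLLAMA_URL ?? "http://ollama:11434";

export async function callOllama(payload) {
  const res = await fetch(`${OLLAMA_URL}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return res.json();
}
```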
This stage also introduced three new agents:
- Resume Tailor – Customizes resumes to match specific job descriptions or company requirements.
- Code Explainer – Analyzes code snippets and explains their functionality in simple terms.
- Meeting Notes Generator – Converts raw transcripts or notes into structured, readable meeting summaries.
By the end of this phase, the system was fully containerized,
stable, and able to run entirely through docker-compose up, eliminating
host-side dependency issues.
Part 3: Finalization, Deployment, and Documentation
The last part was not focused on new code development. Instead, its
primary objective was to finalize the system, push both Docker
images to Docker Hub, and prepare detailed documentation.
This stage emphasized real-world deployment readiness -
ensuring that the application could be pulled and run directly on any machine
without local setup.
Containers Involved and Download Links
Frontend Container – Next.js Application
The Next.js frontend image was custom-built by the author; aside from the node:18-alpine base image, it does not derive from any pre-existing application image.
Purpose:
Provides the web interface that allows users to interact with the five AI
agents. It handles text input, displays model responses, and maintains
session-based interactions.
Base Image Used:
node:18-alpine - chosen for its lightweight footprint and optimization for
Node.js web applications.
Key Features:
- Built using Next.js, a React-based framework for server-rendered web apps.
- Communicates with the Ollama backend through internal Docker networking.
- Uses the fetch() API for backend communication, defined in route.js.
Download Link (Docker Hub):
sankarraja/agentverse-web
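The project's actual Dockerfile is not reproduced in this report. A typical multi-stage Dockerfile for a Next.js app on node:18-alpine, given purely as a sketch, might look like this:

```dockerfile
# Sketch of a typical node:18-alpine Dockerfile for a Next.js app;
# the project's actual Dockerfile may differ in detail.
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci            # install dependencies from the lockfile
COPY . .
RUN npm run build     # produce the production Next.js build

FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app ./
EXPOSE 3000
CMD ["npm", "start"]  # serve the built app
```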
Backend Container – Ollama Service
Purpose:
Hosts the AI models and provides inference capabilities for all agent tasks.
The Ollama container acts as a local model runtime accessible via HTTP API
endpoints.
Base Image Used:
ollama/ollama:latest - the official image that includes preconfigured model
runtime support.
Key Features:
- Handles all model executions and inference requests.
- Supports multiple models, such as gemma3:1b.
- Provides consistent backend availability independent of the host machine.
Modifications Made to the Ollama Container After Download
1. Model Setup: Pulled and configured the required large language models (e.g., gemma3:1b) inside the container for agent-specific tasks.
2. Port Configuration: Exposed port 11434 to enable HTTP-based communication with the frontend container.
3. Networking Integration: Connected the container to the Docker Compose internal network for seamless inter-container communication using the service name ollama.
4. Volume Mounting: Added a persistent volume to store downloaded models locally, preventing re-downloads across rebuilds.
5. Environment Optimization: Adjusted runtime settings and environment variables to ensure stable model inference and efficient resource utilization.
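Most of these modifications map onto standard Docker commands and Compose settings. As a sketch (the volume name is an assumption; /root/.ollama is the image's documented model directory):

```sh
# Step 1: pull the model inside the running Ollama container.
docker exec -it ollama ollama pull gemma3:1b

# Steps 2-4 correspond to Compose settings along these lines:
#   ports:    "11434:11434"              -> HTTP access for the frontend
#   networks: default Compose network    -> reachable by the name "ollama"
#   volumes:  ollama_data:/root/.ollama  -> persist models across rebuilds
```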
Download Link (Docker Hub): ollama/ollama
Software and Tools Used
| Software / Tool | Purpose / Usage |
| --- | --- |
| Next.js | Framework for building the web interface using React and server-side rendering. |
| Ollama | Model runtime backend responsible for handling LLM inference. |
| Node.js 18 (Alpine) | Lightweight runtime used to execute the frontend application. |
| Docker | Platform for building, packaging, and running containers. |
| Docker Compose | Tool for orchestrating multiple containers together. |
| Visual Studio Code | IDE used for writing and debugging the Next.js application. |
| GitHub | Version control platform for managing project code. |
| Docker Hub | Registry for publishing and sharing container images. |
Overall Architecture of All Three Parts
The architecture evolved over three phases, each improving
upon the previous to reach a stable, modular, and portable system.
Part 1 – Single Container Application
In this stage, the Next.js frontend was containerized
while the Ollama backend ran natively on the host system. Communication
occurred via API calls to http://localhost:11434.
Containers and Tools Used:
- Container: ai-agents-app (Next.js frontend)
- Software: Node.js, Ollama (local), Docker
Input / Output:
- Input: User text entered in the frontend interface.
- Output: AI-generated summaries or emails displayed on the web page.
Part 2 – Multi-Container Setup Using Docker Compose
Here, the architecture shifted to a two-container
system where both the frontend and backend ran inside Docker. The
docker-compose.yaml file handled orchestration and network creation, allowing
containers to reference each other by name.
Containers Used:
- web → Frontend (Next.js)
- ollama → Backend (Ollama AI runtime)
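The Compose file itself is not reproduced in this report. A minimal docker-compose.yaml consistent with the setup described here (service names web and ollama, port 11434, depends_on, and a persistent model volume) might look like the following; the volume name and the OLLAMA_URL variable are assumptions:

```yaml
# Sketch of a docker-compose.yaml consistent with the described setup;
# the volume name and OLLAMA_URL variable are illustrative assumptions.
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - OLLAMA_URL=http://ollama:11434   # internal service name, not localhost
    depends_on:
      - ollama
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama        # keep pulled models across rebuilds

volumes:
  ollama_data:
```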
Part 3 – Docker Hub Deployment and Validation
This phase finalized the project for deployment: both images
were built, tagged, and pushed to Docker Hub, with the focus on ensuring
portability and reproducibility.
Architecture Description
The AI Agent System follows a modular microservice
architecture. The Next.js container handles routing, user input, and
display logic, while the Ollama container provides inference
capabilities.
When a user selects an agent (for example, the Code
Explainer), the frontend sends the text and role instructions to the
backend API defined in /api/execute/route.js. The backend then runs the
corresponding model inside Ollama and streams the output back to the frontend.
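The handler's exact contents are not shown in this report. A sketch of what /api/execute/route.js could look like, assuming an App Router route, Ollama's standard /api/generate endpoint, and illustrative request field names:

```js
// app/api/execute/route.js - illustrative sketch, not the project's
// verbatim code. Forwards the user's text plus the selected agent's
// role instructions to Ollama and streams the reply back.
const OLLAMA_URL = process.env.OLLAMA_URL ?? "http://ollama:11434"; // assumed name

export async function POST(request) {
  const { role, text } = await request.json(); // field names are assumptions

  const upstream = await fetch(`${OLLAMA_URL}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gemma3:1b",
      prompt: `${role}\n\n${text}`,
      stream: true, // Ollama emits newline-delimited JSON chunks
    }),
  });

  // Pass Ollama's streamed body straight through to the browser.
  return new Response(upstream.body, {
    headers: { "Content-Type": "application/x-ndjson" },
  });
}
```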
This design completely removes dependency on the host environment, ensuring consistent performance and reproducibility.
Procedure
Part 1:
- Developed the Next.js application with the Summarizer and Email Writer agents.
- Ran Ollama directly on the host system and verified the frontend's REST API calls to it.
- Containerized the frontend and ran it as a single Docker container.
Part 2:
- Split the architecture into two Dockerfiles - one for the frontend and one using the official Ollama image.
- Created a docker-compose.yaml to run both containers together.
- Updated environment variables and API URLs to use internal service names.
- Added dependency control with depends_on.
- Verified that both containers start correctly with docker-compose up.
Part 3:
- Logged into Docker Hub using docker login.
- Tagged both images using docker tag.
- Pushed them to Docker Hub using docker push.
- Pulled and ran the containers on a different system to verify that the deployment worked.
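Concretely, these steps reduce to a handful of CLI commands. A sketch using the frontend image name listed earlier (local image names and tags are illustrative):

```sh
# Authenticate, tag, and publish; local names/tags are illustrative,
# sankarraja/agentverse-web is the Hub repository listed earlier.
docker login
docker tag agentverse-web sankarraja/agentverse-web:latest
docker push sankarraja/agentverse-web:latest

# Validation on another machine:
docker pull sankarraja/agentverse-web:latest
```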
Outcomes
The final system demonstrated a stable, fully
containerized multi-agent AI web platform. It successfully integrated five
agents - Summarizer, Email Writer, Resume Tailor, Code Explainer, and Meeting
Notes Generator - each capable of handling distinct text-based tasks.
Containerization ensured:
- Consistent behavior across machines.
- Simple deployment using Docker Compose.
- Fast startup with cached dependencies.
- Reproducible builds and easy image distribution through Docker Hub.
Conclusion
The AI Agent System project effectively demonstrates how
modern web applications can integrate AI functionalities through containerized
microservices. Over the three development phases, it evolved from a
single-container prototype to a multi-container production-ready system
capable of being deployed anywhere.
The final phase emphasized finalization, validation, and
documentation. By pushing both images to Docker Hub, the project
achieved true portability and reproducibility. Through this process, I gained a
deep understanding of how containerization works - from writing Dockerfiles and
building images, to orchestrating multiple services and deploying them
publicly.
This project showcases how combining technologies like Next.js,
Ollama, and Docker can create powerful, modular AI systems - bridging
frontend interactivity, backend intelligence, and seamless deployment.
Acknowledgement
I would like to express my sincere gratitude to Mrs. Subbulakshmi T., Faculty of the School of Computer Science and Engineering, VIT Chennai, for her invaluable guidance and support throughout the development of this project as part of the Cloud Computing (BCSE408L) course. Her insightful feedback and encouragement were instrumental in shaping the direction and completion of this work.
I am also deeply thankful to the VIT SCOPE department for providing the academic environment and resources that made this project possible. My heartfelt appreciation extends to my peers and mentors for their constant motivation, constructive discussions, and technical assistance during various stages of development and testing.
Finally, I would like to acknowledge the open-source communities and official documentation of Next.js, Docker, and Ollama, which served as essential references throughout this project. Their contributions have been invaluable in enabling me to explore, experiment, and successfully implement the AgentVerse system.
References
- Ollama Documentation – For running and managing local LLM models.
- Next.js Official Docs – For frontend structure and API routes.
- Docker and Docker Compose Tutorials – For building and orchestrating containers.