# Deploy a CrewAI Agent to the Cloud
This guide walks you through the process of deploying a CrewAI agent to a cloud endpoint. We'll use Itura's free tier to host the agent in a serverless environment. The deployment will be connected to a GitHub repository, so any changes to the code will automatically be reflected in the deployment.
## What you need
- About 10 minutes
- A GitHub account
- A basic understanding of CrewAI
- The `uv` package manager (the default package manager for CrewAI)
- An OpenAI API key (or other LLM API key)
## What you will deploy
You will deploy a simple CrewAI agent to a serverless cloud endpoint. The endpoint will accept POST requests at a URL similar to `https://<your-agent-slug>.agent.itura.ai/run`. When a POST request is sent to this URL, the agent will be initiated and will start executing. Environment variables can be added to the agent; they will be injected at runtime.
Our starting point is a simple CrewAI crew consisting of two agents that work together to create a research report on the current state of LLMs. The key entry point for the crew is the `main.py` file.
```python
#!/usr/bin/env python
from crewai_deployment_example.crew import CrewaiDeploymentExample


def run():
    """
    Run the crew.
    """
    inputs = {
        'topic': 'AI LLMs',
        'current_year': '2025'
    }
    CrewaiDeploymentExample().crew().kickoff(inputs=inputs)
```
### (Optional) Run the agent locally
If you want, you can run the agent locally by:

- Adding the necessary entries to the `.env` file. For example:

```
MODEL=gpt-4o-mini
OPENAI_API_KEY=<your-openai-api-key>
```

- Using the CrewAI CLI:

```
crewai run
```
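Alternatively, if you'd rather kick off the crew from a plain Python script instead of the CLI, here is a minimal sketch. It assumes the `python-dotenv` package is available to load the `.env` file (the CrewAI CLI handles this for you):

```python
# Sketch: run the crew directly from Python instead of `crewai run`.
# Assumes python-dotenv is installed to load the .env file.
from dotenv import load_dotenv

from crewai_deployment_example.crew import CrewaiDeploymentExample

load_dotenv()  # makes MODEL and OPENAI_API_KEY visible to the crew

result = CrewaiDeploymentExample().crew().kickoff(
    inputs={'topic': 'AI LLMs', 'current_year': '2025'}
)
print(result.raw)
```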
## How to complete this guide
- Download and unzip the source repository for this guide, or clone it using Git:

```
git clone https://github.com/Itura-AI/crewai-deployment-example.git
```

- cd into `crewai-deployment-example`

When you finish, you can check your work against the code in the `complete` branch of `crewai-deployment-example`.
## Deploying a CrewAI agent
### 1. Install Flask
To deploy the agent to the Itura cloud platform, we'll need to create a simple HTTP endpoint that will be used to initiate the agent process. We'll use Flask to create this endpoint. First, install Flask using the `uv` package manager:

```
uv add flask
```
### 2. Create a `/run` endpoint
To initiate the agent process, Itura looks for a `/run` endpoint that accepts POST requests. Let's add this endpoint to our agent code. Adjust the `main.py` file to include the following:
```python
#!/usr/bin/env python
from crewai_deployment_example.crew import CrewaiDeploymentExample
from flask import Flask, request, jsonify

app = Flask(__name__)


@app.route('/run', methods=['POST'])
def run():
    """
    Run the crew.
    """
    inputs = {
        'topic': 'AI LLMs',
        'current_year': '2025'
    }
    # You can also get the inputs from the request body:
    # inputs = request.json

    try:
        result = CrewaiDeploymentExample().crew().kickoff(inputs=inputs)
        return jsonify({"output": result.raw})
    except Exception as e:
        # Return a 500 status code so callers can tell the run failed
        return jsonify({"error": str(e)}), 500


# This is optional, but uncomment it if you want to run
# the Flask server locally:
# if __name__ == '__main__':
#     app.run(host='0.0.0.0', port=5000)
```
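Before deploying, you can sanity-check the endpoint with Flask's built-in test client. This is a minimal sketch; it assumes the module above is importable as `crewai_deployment_example.main` (adjust the import to your project layout), and note that it actually runs the crew, so a valid LLM API key must be set in your environment:

```python
# Minimal local smoke test for the /run endpoint using Flask's test client.
# Assumes the app above lives in crewai_deployment_example/main.py.
from crewai_deployment_example.main import app

client = app.test_client()
response = client.post('/run', json={})

print(response.status_code)  # 200 if the crew ran, 500 on error
print(response.get_json())
```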
### 3. Create a `requirements.txt` file
Next, create a `requirements.txt` file to specify your agent's dependencies. Itura will use this file to install the dependencies when the agent is deployed:

```
uv pip compile pyproject.toml -o requirements.txt
```
Once you have the Flask `/run` endpoint and the `requirements.txt` file, push your code to a GitHub repository.
### 4. Deploy to Itura Cloud
Go to app.itura.ai and create a new agent project. You'll be prompted to connect your GitHub account and select the repository and branch you want to deploy. Select the branch holding your agent code (with the Flask `/run` endpoint and `requirements.txt` file), and click Deploy.
Note that the deployment might take a couple of minutes to complete.
Once the deployment is complete, you'll be able to see an auto-generated API key (e.g., `sk-agent-aa3f96a3-43e9-448f-ad94-84a38e64c229`). Save this key in a secure location. You can generate a new key at any time from the project dashboard.
### 5. Add environment variables
When the deployment is complete, you will also be able to add environment variables to the agent. These will be injected at runtime with each request to the agent endpoint. Add your OpenAI API key (`OPENAI_API_KEY`) as an environment variable from the UI.
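At runtime these behave like ordinary environment variables, so your code (or CrewAI itself) can read them as usual. A quick illustration; CrewAI's OpenAI integration already picks up `OPENAI_API_KEY` on its own, so this explicit check is purely optional:

```python
# Sketch: reading a variable injected by Itura at runtime.
import os

# Fail fast if the key wasn't configured in the Itura UI.
openai_api_key = os.environ.get("OPENAI_API_KEY")
if not openai_api_key:
    raise RuntimeError("OPENAI_API_KEY is not set")
```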
### 6. Initiate the agent
Now that the agent is deployed, you can initiate one or more instances of it by sending a POST request to the agent endpoint. You can find the endpoint URL on the project dashboard (e.g., `https://<your-agent-slug>.agent.itura.ai/run`). Using this URL and your API key, initiate the agent with a POST request:
```
curl --request POST \
  --url https://{agentSlug}.agent.itura.ai/run \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "input": "Your agent parameters here"
  }'
```
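If you'd rather trigger the run from Python, here is an equivalent sketch using the `requests` library (the URL and key below are placeholders for your own values):

```python
import requests

AGENT_URL = "https://<your-agent-slug>.agent.itura.ai/run"
API_KEY = "<your-api-key>"  # the key from your project dashboard

response = requests.post(
    AGENT_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"input": "Your agent parameters here"},
)
print(response.status_code)  # expect 202
print(response.json())
```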
If successful, you'll receive a `202 Accepted` response. This means your request is queued:
```json
{
  "run_id": "unique-run-id",
  "message": "Run request accepted and queued for execution",
  "status": "PENDING"
}
```
You can check the status of the run from Itura's dashboard, or by sending a GET request to the `/status` endpoint. The status endpoint is provided by the Itura platform, so you don't need to add it to your code.
```
curl --request GET \
  --url https://{agentSlug}.agent.itura.ai/status/{run_id} \
  --header 'Authorization: Bearer <token>'
```
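And a small polling sketch in Python, under the assumption that `PENDING` is the only non-terminal status (check Itura's documentation for the full set of status values):

```python
import time

import requests

STATUS_URL = "https://<your-agent-slug>.agent.itura.ai/status/unique-run-id"
API_KEY = "<your-api-key>"

while True:
    resp = requests.get(STATUS_URL, headers={"Authorization": f"Bearer {API_KEY}"})
    status = resp.json().get("status")
    print(status)
    if status != "PENDING":  # assumption: anything else is terminal
        break
    time.sleep(5)  # poll every few seconds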
## Conclusion
That's it! You've just deployed a CrewAI agent to the cloud and initiated it using a simple HTTP request. From the Itura dashboard, you can see the agent's logs, metrics, and more. Though not covered in this guide, you can also parse the POST request body inside your `/run` endpoint, which lets you pass more complex data to the agent as input; a sketch follows below.
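For example, here is a minimal sketch of a `/run` handler that derives the crew inputs from the request body (the field names match this guide's crew; the fallback defaults are illustrative):

```python
from crewai_deployment_example.crew import CrewaiDeploymentExample
from flask import Flask, request, jsonify

app = Flask(__name__)


@app.route('/run', methods=['POST'])
def run():
    # Read inputs from the JSON body, falling back to the guide's defaults.
    payload = request.get_json(silent=True) or {}
    inputs = {
        'topic': payload.get('topic', 'AI LLMs'),
        'current_year': payload.get('current_year', '2025'),
    }
    try:
        result = CrewaiDeploymentExample().crew().kickoff(inputs=inputs)
        return jsonify({"output": result.raw})
    except Exception as e:
        return jsonify({"error": str(e)}), 500
```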
If you want to update the deployment code, you can do so by pushing a new commit to the GitHub repository. Itura will automatically detect the changes and update the deployment.