Deploy an MCP Server by Importing from GitHub
If you’ve already written and hosted your MCP server code on GitHub, Contexa makes it easy to deploy it directly, with no manual setup required. Just provide the repository details, configure the environment, and go live in minutes. This method is ideal for developers who want to bring their own logic, tooling, or custom server implementations into the Contexa ecosystem.
Prerequisites for Deploying a Server from a GitHub Repository
Before deploying a server from your GitHub repository, make sure it meets the following requirements:
- **Dockerfile present** – Your repository must contain a valid `Dockerfile` in the root or a specified directory.
- **Base image recommendation (preferred)** – Use an Alpine-based image (e.g., `python:3.11-alpine`, `node:20-alpine`) for smaller image sizes and faster deployments.
- **Port exposure** – The `Dockerfile` must expose port 8080.
- **Tracing (optional)** – If you want to enable request tracing, install and configure the `mcp-trace` library in your application and add it to your dependencies.
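To illustrate the requirements above, here is a minimal `Dockerfile` sketch for a Python-based server. The file names (`requirements.txt`, `server.py`) are assumptions; adapt them to your project’s layout.

```dockerfile
# Alpine base image keeps the build small and deployments fast
FROM python:3.11-alpine

WORKDIR /app

# Install dependencies first to take advantage of Docker layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# Contexa expects the server to listen on port 8080
EXPOSE 8080

# Replace server.py with your application's actual entry point
CMD ["python", "server.py"]
```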
Step-by-step Instructions
1. Go to the “Create MCP Server” section and select the “Import from GitHub” option. (You’ll see the “Import from GitHub” card with a GitHub icon and a “Select” button.)
2. Fill in the required deployment settings:
   - **Server Name** – Give your server a unique name (e.g., `notion-mcp`).
   - **Repository URL** – Paste the GitHub repo URL (e.g., `https://github.com/username/repository`).
   - **Transport Type** – Select your server’s supported transport protocol (e.g., HTTP, SSE).
   - **Endpoint Path** – Define the route where your MCP server will be hosted (e.g., `/api/mcp`).
3. Click “Next” to move to the Environment & Tracing step.
4. Configure any environment variables your server needs:
   - Define key-value pairs (e.g., `API_KEY=xyz123`).
   - You can add multiple variables using the “+” button.
   - Optionally, toggle on tracing to monitor tool calls and logs after deployment.
5. Click “Deploy.” Contexa builds and launches your server, and you’ll receive a public URL where your MCP server is live and ready to be consumed by AI agents.
Once Deployed
- The server is now accessible through a single MCP-compatible endpoint.
- You can connect it with Cursor, Copilot Studio, Gemini, or any other agent framework that supports MCP.
- Manage your server via the Contexa dashboard for logs, updates, and scaling.
After deploying your server, you can use it from any MCP-compatible client by copying the config and pasting it into the `mcp.json` file for your respective client.
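As a sketch, an entry in a client’s `mcp.json` might look like the following. The server name and URL here are placeholders; use the exact config provided by the Contexa dashboard, and note that the file’s schema can vary slightly between clients.

```json
{
  "mcpServers": {
    "notion-mcp": {
      "url": "https://your-deployment-url.example.com/api/mcp"
    }
  }
}
```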