Get up and running with your first MCP server in just a few steps. This guide walks you through signing in, deploying an MCP server, testing it in the Playground, and viewing traces.
Step 1: Sign In or Create an Account
Go to platform.contexaai.com and sign in using your credentials.
Don’t have an account? Click Sign Up to register in seconds.
Step 2: Deploy Your MCP Server
Choose one of the following ways to deploy:
🔍 Option 1: Use a Curated Server
Navigate to the MCP Directory.
Browse verified servers contributed by trusted developers and organizations.
Click Deploy on any server to set it up instantly.
🔗 Option 2: Bring Your Own Server
Go to Directory > Add Server > via GitHub.
Enter the URL of a public GitHub repository containing your MCP code.
Configure deployment settings and click Deploy.
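At its core, an MCP server exposes tools over JSON-RPC 2.0 using methods such as `tools/list` and `tools/call`. The official MCP SDKs handle this plumbing for you, but as a rough illustration of what a repo-hosted server does, here is a schematic, stdlib-only sketch (the `add` tool and its schema are hypothetical examples, not part of any real server):

```python
# Schematic sketch of the JSON-RPC 2.0 tool-call loop inside an MCP server.
# Real servers should use an official MCP SDK; the "add" tool is a made-up example.

TOOLS = {
    "add": {
        "description": "Add two integers.",
        "inputSchema": {
            "type": "object",
            "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
            "required": ["a", "b"],
        },
        "handler": lambda args: args["a"] + args["b"],
    }
}

def handle(request: dict) -> dict:
    """Dispatch a single JSON-RPC 2.0 request to the tool registry."""
    method = request.get("method")
    if method == "tools/list":
        # Advertise the available tools and their input schemas.
        result = {
            "tools": [
                {"name": name, "description": t["description"],
                 "inputSchema": t["inputSchema"]}
                for name, t in TOOLS.items()
            ]
        }
    elif method == "tools/call":
        # Invoke the named tool with the supplied arguments.
        params = request.get("params", {})
        value = TOOLS[params["name"]]["handler"](params.get("arguments", {}))
        result = {"content": [{"type": "text", "text": str(value)}]}
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": f"Unknown method: {method}"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

if __name__ == "__main__":
    # Minimal stdio transport: one JSON-RPC message per line.
    import sys, json
    for line in sys.stdin:
        print(json.dumps(handle(json.loads(line))), flush=True)
```

When you deploy via GitHub, the platform builds and hosts your server for you, so your repo only needs the tool definitions, not the transport details.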
📄 Option 3: Create from OpenAPI Spec
Navigate to Directory > Add Server > via OpenAPI.
Upload your OpenAPI 3.0 spec file or paste the spec into the editor.
Name your server and click Deploy.
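If you don't have a spec handy, a minimal OpenAPI 3.0 document looks like the sketch below; each operation in the spec can then be exposed as a callable tool. The `/weather` endpoint, `getWeather` operation, and fields shown are made-up examples:

```yaml
openapi: "3.0.3"
info:
  title: Weather Lookup (example)
  version: "1.0.0"
paths:
  /weather:
    get:
      operationId: getWeather
      summary: Return the current temperature for a city.
      parameters:
        - name: city
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Current weather for the requested city.
          content:
            application/json:
              schema:
                type: object
                properties:
                  temperature:
                    type: number
```

Descriptive `operationId` and `summary` fields are worth the effort: they are what an LLM sees when deciding which tool to call.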
Step 3: Test in the Playground
Once your MCP server is deployed:
- Go to the Playground section.
- Select your MCP server from the dropdown list.
- Choose an LLM model (e.g., GPT-4, GPT-4o, GPT-4o-mini).
- Send a request and see live responses from your MCP server.
- Optionally, compare outputs across different models to evaluate performance.
Step 4: View Traces & Logs
- Navigate to Traces.
- Platform Logs: See all tool calls with timestamps, inputs, outputs, and status.
- Server Session Logs: View grouped logs by session for detailed analysis.
- Use filters to drill down into specific tools or calls.
✅ You’re All Set!
You’ve now:
- ✅ Signed in
- ✅ Deployed your first MCP server
- ✅ Tested it with real models
- ✅ Viewed logs and metrics
Start building intelligent, composable tools with the power of the Model Context Protocol (MCP) and LLMs.