Complete reference for the AlphAstra REST API
All responses are returned in JSON format with appropriate HTTP status codes.
The /evaluate endpoint performs a comprehensive evaluation of a website across 15 quality metrics organized into 3 groups: Usability, Credibility, and Content Quality. It returns detailed scores and analysis.
| Parameter | Type | Required | Description |
|---|---|---|---|
| url | string | Required | The URL of the website to evaluate. Can include or omit https:// |
| name | string | Optional | Custom name for the evaluation |
GET /evaluate?url=https://example.com
GET /evaluate?url=example.com&name=My%20Test
{
"global": 75,
"groups": [
{
"name": "usability",
"score": 85,
"categories": ["accessibility", "user_experience", "clarity", "security"]
},
{
"name": "credibility",
"score": 72,
"categories": ["veracity", "reliability", "scientificity", "sources", "plagiarism", "sentiment"]
},
{
"name": "content_quality",
"score": 68,
"categories": ["bias", "bullshit", "semantic", "vocabulary", "topics"]
}
],
"metrics": {
"accessibility": 90,
"user_experience": 85,
"clarity": 80,
"security": 85,
"veracity": 75,
"reliability": 70,
...
},
"meta": {
"title": "Example Website",
"description": "Website description",
"url": "https://example.com"
}
}
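The per-group and per-metric scores can be read directly from this structure. A minimal Python sketch (field names taken from the response above; the local base URL http://localhost:7789 is assumed from the usage examples at the end of this page):

```python
import requests

# Evaluate a site and walk the response structure shown above.
response = requests.get(
    "http://localhost:7789/evaluate",
    params={"url": "example.com", "name": "My Test"},
)
response.raise_for_status()
data = response.json()

print(f"{data['meta']['title']} -> global score {data['global']}")
for group in data["groups"]:
    print(f"  {group['name']}: {group['score']} ({', '.join(group['categories'])})")
for metric, score in data["metrics"].items():
    print(f"    {metric}: {score}")
```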
The /scores endpoint retrieves a paginated list of all cached website evaluations, with filtering and sorting capabilities.
| Parameter | Type | Default | Description |
|---|---|---|---|
| page | integer | 1 | Page number for pagination |
| per_page | integer | 10 | Number of results per page |
| order_by | string | global | Field to sort by (global, url, etc.) |
| sort_order | string | desc | Sort order: "asc" or "desc" |
| filter | string | - | Filter by URL, title, or description |
GET /scores?page=1&per_page=20&order_by=global&sort_order=desc
GET /scores?filter=example.com
{
"pagination": {
"total": 45,
"pages": 5
},
"scores": [
{
"url": "https://example.com",
"global": 75,
"meta": {
"title": "Example Website"
}
}
],
"groups": [...],
"total_scores": 45,
"page": 1,
"per_page": 10
}
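Because results are paginated, a typical client keeps requesting pages until pagination.pages is exhausted. A sketch under the same assumptions (field names from the response above, local base URL from the usage examples below):

```python
import requests

BASE_URL = "http://localhost:7789"  # assumed local instance, as in the usage examples

def iter_scores(per_page=20, order_by="global", sort_order="desc"):
    """Yield every cached evaluation, following the pagination metadata."""
    page = 1
    while True:
        resp = requests.get(
            f"{BASE_URL}/scores",
            params={"page": page, "per_page": per_page,
                    "order_by": order_by, "sort_order": sort_order},
        )
        resp.raise_for_status()
        data = resp.json()
        yield from data["scores"]
        if page >= data["pagination"]["pages"]:
            break
        page += 1

for entry in iter_scores():
    print(entry["url"], entry["global"])
```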
The /ai endpoint performs an AI-powered evaluation of a website using OpenAI GPT, returning a detailed analysis of content quality, credibility, and other AI-assessed metrics.
| Parameter | Type | Required | Description |
|---|---|---|---|
| url | string | Required | The URL of the website to analyze |
GET /ai?url=https://example.com
{
"scientificity": {
"score": 75,
"analysis": "Content shows moderate scientific backing..."
},
"clarity": {
"score": 82,
"analysis": "Content is well-structured and clear..."
},
...
}
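Each top-level key in the response pairs a numeric score with a free-text analysis, so a client can simply iterate over whatever metrics the AI evaluation returns. A sketch (local base URL assumed from the usage examples below):

```python
import requests

# AI-powered evaluation: each top-level key holds a score and an analysis string.
response = requests.get(
    "http://localhost:7789/ai",
    params={"url": "example.com"},
)
response.raise_for_status()

for metric, result in response.json().items():
    print(f"{metric}: {result['score']}")
    print(f"  {result['analysis']}")
```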
The /definition endpoint retrieves the definition of a word or term used in the evaluation metrics.
| Parameter | Type | Required | Description |
|---|---|---|---|
| word | string | Required | The word to define |
GET /definition?word=veracity
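For example, to look up the term shown in the request above (a sketch; the shape of the response body is not documented here, so it is printed as-is):

```python
import requests

# Fetch the definition of a metric term; the response is printed verbatim
# because its exact shape is not documented above.
response = requests.get(
    "http://localhost:7789/definition",
    params={"word": "veracity"},
)
response.raise_for_status()
print(response.json())
```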
The /debug endpoint retrieves debug information for a specific evaluation, identified by UUID. Useful for troubleshooting.
| Parameter | Type | Required | Description |
|---|---|---|---|
| uuid | string | Required | The UUID of the evaluation |
GET /debug?uuid=abc123def456
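A quick way to pull the debug payload for a prior evaluation (the UUID below is the placeholder from the example request; the response shape is not documented here, so it is printed as-is):

```python
import requests

# Fetch debug information for an evaluation; abc123def456 is a placeholder UUID.
response = requests.get(
    "http://localhost:7789/debug",
    params={"uuid": "abc123def456"},
)
response.raise_for_status()
print(response.json())
```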
| Code | Status | Description |
|---|---|---|
| 200 | OK | Request successful |
| 400 | Bad Request | Missing or invalid parameters |
| 500 | Internal Server Error | Server error during processing |
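Clients can branch on these codes directly rather than parsing error text. A minimal sketch (local base URL assumed from the examples below; the exact format of error bodies is not specified here):

```python
import requests

# Call /evaluate and branch on the HTTP status codes listed above.
response = requests.get(
    "http://localhost:7789/evaluate",
    params={"url": "example.com"},
)

if response.status_code == 200:
    print("Global score:", response.json()["global"])
elif response.status_code == 400:
    print("Bad request: check that the url parameter is present and valid")
elif response.status_code == 500:
    print("Server error while processing the evaluation")
else:
    print("Unexpected status:", response.status_code)
```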
// Evaluate a site and log the global score and group breakdown
fetch('http://localhost:7789/evaluate?url=example.com')
.then(response => response.json())
.then(data => {
console.log('Global Score:', data.global);
console.log('Groups:', data.groups);
})
.catch(error => console.error('Error:', error));
# Evaluate a site and print the global score and group breakdown
import requests
response = requests.get(
'http://localhost:7789/evaluate',
params={'url': 'example.com'}
)
data = response.json()
print(f"Global Score: {data['global']}")
print(f"Groups: {data['groups']}")
curl "http://localhost:7789/evaluate?url=example.com" curl "http://localhost:7789/scores?page=1&per_page=10"