LLM-to-REST-API Integration Architecture for ValkyrAI

Executive Summary

The most efficient way for ValkyrAI's LLM system to create data via REST APIs is through a Three-Tier Command Execution System that leverages:

  1. Fragment System (Schema Context) - Fast, cached model schemas
  2. Command Processor (Execution Layer) - Delimited JSON commands with ACL enforcement
  3. Context Propagation (Security Layer) - User context (browser) + SYSTEM context (server)

Architecture Overview

┌──────────────────────────────────────────────────────────────────────
│ LLM (SageChat/ValorIDE)
│
│   1. Receives schema fragments (cached, fast)
│      - Model schemas via the Fragment System
│      - ExecModule catalog
│      - API endpoint documentation
│   2. Generates delimited JSON commands
│      ---COMMAND--- {"action":"create","model":"Workflow", ...}
└──────────────────────────────────────────────────────────────────────
                                   │
                                   ▼
┌──────────────────────────────────────────────────────────────────────
│ Command Processor Layer
│
│   Parser → Validates → ACL Check → Executes
│
│   BROWSER CONTEXT:                    SERVER CONTEXT:
│   - Uses user's JWT token             - Uses SYSTEM user
│   - Respects user's ACL permissions   - Trust proxy access
│   - Executes in UI thread             - Background execution
└──────────────────────────────────────────────────────────────────────
                                   │
                                   ▼
┌──────────────────────────────────────────────────────────────────────
│ REST API Layer
│
│   ThorAPI Generated Services:
│   - WorkflowService.createWorkflow(workflow)
│   - TaskService.createTask(task)
│   - ExecModuleService.createExecModule(module)
│   + 200+ other auto-generated services
└──────────────────────────────────────────────────────────────────────
                                   │
                                   ▼
┌──────────────────────────────────────────────────────────────────────
│ Backend (Spring Boot)
│
│   ACL Enforcement → @PreAuthorize → JPA Repository → PostgreSQL
│   - hasPermission(object, 'CREATE')
│   - ROLE_ADMIN, ROLE_SYSTEM, ROLE_USER
│   - @SecureField encryption for sensitive data
└──────────────────────────────────────────────────────────────────────

1. Fragment System (Schema Context)

Purpose

Provide the LLM with fast, cached model schemas so it knows how to construct valid API payloads.

Implementation

Backend Endpoint (Already Exists):

GET /v1/docs/fragments/minimal        → MinimalFragment (LLM-optimized)
GET /v1/docs/fragments/workflow       → WorkflowFragment (full workflow context)
GET /v1/docs/fragments/schemas/:name  → Specific model schema

Frontend Service (apiDocsService.ts):

// Fetch minimal fragment (cached 1 hour)
const fragment = await fetchMinimalFragment();

// Build LLM context (compact markdown)
const context = buildLLMContext(fragment);

LLM Context Example:

## ValkyrAI Workflow Schema

### Available ExecModules:

- com.valkyrlabs.workflow.modules.EmailModule
- com.valkyrlabs.workflow.modules.RestApiModule
- com.valkyrlabs.workflow.modules.SlackPostModule
...

### Workflow Structure:

{
  "id": "uuid",
  "name": "string",
  "description": "string",
  "tasks": [...],
  "specs": [...]
}

### Task Structure:

{
  "id": "uuid",
  "workflow": {...},
  "modules": [...],
  "taskId": "string"
}

**Instructions**: When creating workflows, use only ExecModules from the list above.
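
A minimal sketch of how buildLLMContext could assemble that markdown. The execModules and schemas fields on the fragment are illustrative assumptions, not the actual MinimalFragment shape:

// Illustrative sketch only; assumes the fragment exposes execModules and
// schemas fields. The real MinimalFragment shape may differ.
export function buildLLMContext(fragment: {
  execModules: string[];
  schemas: Record<string, object>;
}): string {
  const modules = fragment.execModules.map((m) => `- ${m}`).join("\n");
  const schemas = Object.entries(fragment.schemas)
    .map(([name, schema]) => `### ${name} Structure:\n\n${JSON.stringify(schema, null, 2)}`)
    .join("\n\n");

  return `## ValkyrAI Workflow Schema\n\n### Available ExecModules:\n\n${modules}\n\n${schemas}`;
}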

Why This Works:

  • Fast: Cached in SessionStorage, 1-hour expiry
  • Compact: LLM-optimized (not full OpenAPI spec)
  • Accurate: Generated from ThorAPI OpenAPI specs
  • Always Current: Regenerated on backend build
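
Putting the two calls together, the cached context can be injected into the system prompt before each chat turn (a sketch; BASE_PROMPT stands in for the system prompt shown in section 12):

// Sketch: refresh the system prompt with cached schema context before a chat turn.
// BASE_PROMPT is a placeholder for the system prompt shown in section 12.
async function buildSystemPrompt(): Promise<string> {
  const fragment = await fetchMinimalFragment(); // SessionStorage-cached, 1-hour expiry
  return `${BASE_PROMPT}\n\n## Model Schemas\n\n${buildLLMContext(fragment)}`;
}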

2. Command Processor (Execution Layer)

Purpose

Parse LLM-generated commands (JSON wrapped in ---COMMAND--- delimiters) and execute REST API calls with the proper context.

Current Implementation

Command Format (delimited JSON; SageChat/ValorIDE):

---COMMAND---
{
  "action": "create-workflow",
  "payload": {
    "name": "My Workflow",
    "description": "Test workflow",
    "tasks": [...]
  }
}
---END-COMMAND---
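
For reference, the same envelope expressed as a TypeScript interface (a sketch inferred from the examples in this document, not generated code):

// Command envelope, inferred from the ---COMMAND--- examples in this document.
export interface LLMCommand {
  action: string; // "create" | "read" | "update" | "delete" (plus legacy forms like "create-workflow")
  model?: string; // e.g. "Workflow"; must be on the ACL whitelist
  payload: Record<string, unknown>;
}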

Parser Location:

  • web/typescript/valkyr_labs_com/src/components/SageChat/index.tsx (lines 1190-1270)
  • ValorIDE/webview-ui/src/services/ValorIDEMothershipIntegration.ts

Execution Flow:

// 1. Match commands in the LLM response (the "s" flag lets "." span newlines)
const apiCommandRegex = /---COMMAND---(.+?)---END-COMMAND---/gs;

for (const match of response.matchAll(apiCommandRegex)) {
  // 2. Extract the JSON payload between the delimiters
  const parsed = JSON.parse(match[1].trim());

  // 3. Execute based on action type
  if (parsed.action === "create-workflow") {
    const result = await createWorkflowFromCommand(parsed.payload);
  }
}

Proposed Enhancement: Unified Command Executor

Create: /valkyrai/src/main/java/com/valkyrlabs/llm/LLMCommandExecutor.java

@Service
public class LLMCommandExecutor {

  @Autowired
  private SystemUserService systemUserService;

  @Autowired
  private ApplicationContext context;

  @Autowired
  private ValkyrAclService aclService;

  /**
   * Execute an LLM command with the proper context.
   *
   * @param command   The command object from the LLM
   * @param userAuth  User authentication (for browser context)
   * @param useSystem Whether to use SYSTEM context (for trust proxy)
   */
  public Object executeCommand(LLMCommand command, Authentication userAuth, boolean useSystem) {
    // 1. Validate command structure
    if (!isValidCommand(command)) {
      throw new IllegalArgumentException("Invalid command structure");
    }

    // 2. ACL pre-check (before execution)
    if (!isAllowedToCreate(command.getModel(), userAuth)) {
      throw new AccessDeniedException("User not authorized to create " + command.getModel());
    }

    // 3. Execute in the appropriate context
    if (useSystem) {
      // Server-side execution with SYSTEM privileges
      return systemUserService.runAsSystem(() -> executeInternal(command));
    } else {
      // Browser-side execution with user context
      return executeInternal(command);
    }
  }

  private Object executeInternal(LLMCommand command) {
    // Resolve the ThorAPI service bean dynamically, e.g. "WorkflowService"
    String serviceName = command.getModel() + "Service";
    Object service = context.getBean(serviceName);

    // Call the appropriate CRUD method via reflection
    switch (command.getAction()) {
      case "create":
      case "update":
        return invokeMethod(service, "saveOrUpdate", command.getPayload());
      case "delete":
        return invokeMethod(service, "deleteById", command.getPayload().get("id"));
      case "read":
        return invokeMethod(service, "findById", command.getPayload().get("id"));
      default:
        throw new UnsupportedOperationException("Unknown action: " + command.getAction());
    }
  }

  /**
   * ACL whitelist: models the LLM is allowed to create.
   */
  private static final Set<String> ALLOWED_MODELS = Set.of(
      "Workflow",
      "Task",
      "ExecModule",
      "WorkflowState",
      "IntegrationAccount", // If user has permission
      "Goal",
      "KeyMetric",
      "Application"
      // BLOCKED: Principal, AclEntry, Permission (security risk)
  );

  private boolean isAllowedToCreate(String model, Authentication auth) {
    if (!ALLOWED_MODELS.contains(model)) {
      return false;
    }

    // Additional ACL check for sensitive models
    if (model.equals("IntegrationAccount")) {
      return auth.getAuthorities().stream()
          .anyMatch(a -> a.getAuthority().equals("ROLE_ADMIN"));
    }

    return true;
  }
}

REST Controller:

@RestController
@RequestMapping("/v1/llm")
public class LLMCommandController {

  @Autowired
  private LLMCommandExecutor executor;

  /**
   * Execute an LLM command in user context (browser).
   */
  @PostMapping("/execute")
  @PreAuthorize("hasRole('USER')")
  public ResponseEntity<Object> executeUserCommand(@RequestBody LLMCommand command) {
    Authentication auth = SecurityContextHolder.getContext().getAuthentication();
    Object result = executor.executeCommand(command, auth, false);
    return ResponseEntity.ok(result);
  }

  /**
   * Execute an LLM command in SYSTEM context (server).
   */
  @PostMapping("/execute-system")
  @PreAuthorize("hasRole('SYSTEM')")
  public ResponseEntity<Object> executeSystemCommand(@RequestBody LLMCommand command) {
    Authentication auth = SecurityContextHolder.getContext().getAuthentication();
    Object result = executor.executeCommand(command, auth, true);
    return ResponseEntity.ok(result);
  }
}

Frontend Service (LLMCommandService.ts):

import { api } from "../thorapi/redux/api";

export const llmCommandApi = api.injectEndpoints({
  endpoints: (builder) => ({
    executeLLMCommand: builder.mutation<any, LLMCommand>({
      query: (command) => ({
        url: "/llm/execute",
        method: "POST",
        body: command,
      }),
      invalidatesTags: (result, error, arg) => [{ type: arg.model, id: "LIST" }],
    }),

    executeLLMSystemCommand: builder.mutation<any, LLMCommand>({
      query: (command) => ({
        url: "/llm/execute-system",
        method: "POST",
        body: command,
      }),
      invalidatesTags: (result, error, arg) => [{ type: arg.model, id: "LIST" }],
    }),
  }),
});

export const {
  useExecuteLLMCommandMutation,
  useExecuteLLMSystemCommandMutation,
} = llmCommandApi;

3. Context Propagation (Security Layer)

Browser Context (User)

When to Use:

  • User is interacting with SageChat in browser
  • LLM creates data on behalf of user
  • User's ACL permissions apply

Implementation:

// SageChat detects a command in the LLM response
const [executeLLMCommand] = useExecuteLLMCommandMutation();

// Execute with the user's JWT token (attached automatically by RTK Query)
const result = await executeLLMCommand({
  action: "create",
  model: "Workflow",
  payload: {
    name: "My Workflow",
    description: "Created by LLM",
    tasks: [...]
  }
});

Security:

  • ✅ JWT token sent with request
  • ✅ Spring Security validates token
  • ✅ @PreAuthorize checks user permissions
  • ✅ ACL enforces object-level permissions

Server Context (SYSTEM)

When to Use:

  • ValorIDE executing remote commands
  • Background workflow execution
  • Scheduled jobs / Quartz tasks
  • Trust proxy scenarios

Implementation:

// Workflow background execution
@Async
public CompletableFuture<Workflow> executeWorkflow(Workflow workflow) {
  // Propagate the caller's auth to the async thread
  final Authentication callingAuth = SecurityContextHolder.getContext().getAuthentication();

  return CompletableFuture.supplyAsync(() -> {
    // Run as SYSTEM for database access
    return systemUserService.runAsSystem(() -> {
      // Execute workflow tasks...

      // The LLM can create data in SYSTEM context
      if (task.requiresLLM()) {
        LLMCommand command = buildCommandFromTask(task);
        Object result = llmCommandExecutor.executeCommand(command, callingAuth, true);
      }

      return workflow;
    });
  });
}

Security:

  • ⚠️ ROLE_SYSTEM bypasses some ACL checks
  • ✅ Still logged and audited
  • ✅ Caller's auth preserved for audit trail
  • ✅ Used for "trust proxy" operations

4. ACL Enforcement (Safety Layer)

Model Whitelist

Allowed for LLM Creation:

// Safe models (business logic)
✅ Workflow
✅ Task
✅ ExecModule
✅ WorkflowState
✅ Goal
✅ KeyMetric
✅ Application
✅ IntegrationAccount (ADMIN only)

// BLOCKED (security risk)
❌ Principal (user accounts)
❌ AclEntry (permissions)
❌ Permission (access control)
❌ SecurityConfig (system config)
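
The authoritative whitelist lives in LLMCommandExecutor; a client-side mirror (a sketch, below) can reject disallowed models before a network round-trip, at the cost of keeping the two lists in sync:

// Client-side mirror of the server-side ALLOWED_MODELS whitelist.
// Illustrative only; the check in LLMCommandExecutor remains authoritative.
const ALLOWED_MODELS = new Set([
  "Workflow", "Task", "ExecModule", "WorkflowState",
  "IntegrationAccount", "Goal", "KeyMetric", "Application",
]);

export function isModelAllowed(model: string): boolean {
  return ALLOWED_MODELS.has(model);
}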

Permission Checks

Pre-Execution:

@PreAuthorize("hasPermission(#workflow, 'CREATE')")
public Workflow createWorkflow(Workflow workflow) {
  // Only executes if the caller holds CREATE permission
  return workflowRepository.save(workflow); // workflowRepository: illustrative JPA repository
}

Post-Creation:

// Grant creator ADMIN permission on object
ObjectIdentity oid = new ObjectIdentityImpl(Workflow.class, workflow.getId());
valkyrAclService.grantPermission(oid, creator, BasePermission.ADMINISTRATION);

Field-Level Security

Sensitive Fields:

@Entity
public class IntegrationAccount {

  @SecureField // AES-256 encrypted at rest
  private String apiKey;

  @SecureField
  private String secretToken;
}

The LLM cannot read encrypted fields; values are decrypted server-side only when needed.


5. Workflow Integration

LLM Creates Workflow via Command

LLM Output (in SageChat response):

---COMMAND---
{
  "action": "create",
  "model": "Workflow",
  "payload": {
    "name": "Customer Onboarding Flow",
    "description": "Automated customer onboarding with email and Slack notifications",
    "tasks": [
      {
        "taskId": "validate_customer",
        "modules": [
          {
            "className": "com.valkyrlabs.workflow.modules.RestApiModule",
            "moduleData": "{\"url\":\"https://api.example.com/validate\",\"method\":\"POST\"}"
          }
        ]
      },
      {
        "taskId": "send_welcome_email",
        "modules": [
          {
            "className": "com.valkyrlabs.workflow.modules.EmailModule",
            "moduleData": "{\"to\":\"{{customer.email}}\",\"subject\":\"Welcome!\"}"
          }
        ]
      }
    ]
  }
}
---END-COMMAND---

SageChat Execution:

// Parse the command
const parsed = JSON.parse(commandText);

// Execute via the LLM Command API (.unwrap() surfaces errors as exceptions)
const [executeLLMCommand] = useExecuteLLMCommandMutation();
const result = await executeLLMCommand(parsed).unwrap();

// Show the result to the user
addMessage({
  role: "assistant",
  content: `✅ Workflow created successfully! ID: ${result.id}`,
});

Backend Processing:

@PostMapping("/llm/execute")
public ResponseEntity<Object> executeUserCommand(@RequestBody LLMCommand command) {
  Authentication userAuth = SecurityContextHolder.getContext().getAuthentication();

  // 1. Validate: is this a Workflow?
  if (!command.getModel().equals("Workflow")) {
    return ResponseEntity.badRequest().body("Invalid model");
  }

  // 2. ACL check: can this user create Workflows?
  if (!aclService.hasPermission(userAuth, "Workflow", "CREATE")) {
    return ResponseEntity.status(403).body("Not authorized");
  }

  // 3. Execute the command
  Workflow workflow = (Workflow) executor.executeCommand(command, userAuth, false);

  // 4. Grant the creator ADMIN permission on the new object
  ObjectIdentity oid = new ObjectIdentityImpl(Workflow.class, workflow.getId());
  aclService.grantPermission(oid, userAuth, BasePermission.ADMINISTRATION);

  return ResponseEntity.ok(workflow);
}

6. ValorIDE Remote Execution

SWARM Coordination

ValorIDE can send commands to ValkyrAI backend via WebSocket:

ValorIDE → ValkyrAI:

// ValorIDE agent detects workflow creation needed
mothershipService.sendCommand({
type: 'llm_command',
command: {
action: 'create',
model: 'Workflow',
payload: { ... }
}
});

ValkyrAI Backend:

@MessageMapping("/llm/command")
public void handleLLMCommand(@Payload LLMCommand command, Principal principal) {
  // Execute in SYSTEM context (trust proxy); Spring supplies an Authentication here
  Object result = llmCommandExecutor.executeCommand(
      command,
      (Authentication) principal,
      true // useSystem = true
  );

  // Send the result back via WebSocket
  messagingTemplate.convertAndSendToUser(
      principal.getName(),
      "/queue/llm-results",
      result
  );
}
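
On the ValorIDE side, the result queue can be consumed with a STOMP subscription. A sketch, assuming the @stomp/stompjs client; the broker URL is a placeholder:

import { Client } from "@stomp/stompjs";

// Sketch: subscribe to per-user LLM results pushed by the backend.
// The broker URL is a placeholder; convertAndSendToUser above targets
// the per-user destination /user/queue/llm-results.
const client = new Client({ brokerURL: "wss://mothership.example.com/ws" });

client.onConnect = () => {
  client.subscribe("/user/queue/llm-results", (message) => {
    const result = JSON.parse(message.body);
    console.log("LLM command result:", result);
  });
};

client.activate();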

7. Performance Optimizations

Fragment Caching Strategy

Backend (Generated once per build):

  • Fragment files stored in generated/docs/fragments/
  • Loaded from filesystem (very fast)
  • No database queries needed

Frontend (SessionStorage, 1-hour expiry):

// Cache key
const cacheKey = `valkyrai:fragment:${fragmentName}`;

// Save to cache
sessionStorage.setItem(
  cacheKey,
  JSON.stringify({
    data: fragment,
    timestamp: Date.now(),
    expiry: Date.now() + 60 * 60 * 1000, // 1 hour
  })
);

// Check the cache before fetching
const cached = sessionStorage.getItem(cacheKey);
if (cached) {
  const { data, expiry } = JSON.parse(cached);
  if (Date.now() < expiry) {
    return data; // Return cached data
  }
}

Batch Command Execution

LLM can generate multiple commands:

[
  {"action": "create", "model": "Workflow", "payload": {...}},
  {"action": "create", "model": "Task", "payload": {...}},
  {"action": "create", "model": "ExecModule", "payload": {...}}
]

Backend processes in transaction:

@Transactional
public List<Object> executeBatch(List<LLMCommand> commands, Authentication auth) {
  List<Object> results = new ArrayList<>();
  for (LLMCommand command : commands) {
    results.add(executeCommand(command, auth, false));
  }
  return results;
}
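
A matching frontend mutation could post the whole array in one request. A sketch only: the /llm/execute-batch route is hypothetical and is not among the endpoints defined above:

import { api } from "../thorapi/redux/api";

// Sketch: batch mutation alongside the single-command endpoints.
// The /llm/execute-batch route is hypothetical; only /llm/execute and
// /llm/execute-system are defined in this document.
export const llmBatchApi = api.injectEndpoints({
  endpoints: (builder) => ({
    executeLLMBatch: builder.mutation<any[], LLMCommand[]>({
      query: (commands) => ({
        url: "/llm/execute-batch",
        method: "POST",
        body: commands,
      }),
      // Invalidate the list cache for every model touched by the batch
      invalidatesTags: (result, error, commands) =>
        commands.map((c) => ({ type: c.model, id: "LIST" })),
    }),
  }),
});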

8. Error Handling & Feedback

Validation Errors

Backend returns structured errors:

{
  "error": "ValidationError",
  "model": "Workflow",
  "field": "tasks[0].modules[0].className",
  "message": "Invalid ExecModule: com.example.NotFound",
  "quickFix": {
    "action": "selectValidModule",
    "options": [
      "com.valkyrlabs.workflow.modules.EmailModule",
      "com.valkyrlabs.workflow.modules.RestApiModule"
    ]
  }
}

Frontend displays to user:

try {
  const result = await executeLLMCommand(command).unwrap();
  addMessage({ role: "assistant", content: `✅ Created ${result.name}` });
} catch (error) {
  if (error.data?.quickFix) {
    // Feed the quick-fix options back to the LLM
    addMessage({
      role: "system",
      content: `Error: ${error.data.message}. Available options: ${error.data.quickFix.options.join(", ")}`,
    });
  }
}

Audit Logging

All LLM commands logged:

@Aspect
@Component
public class LLMCommandAuditAspect {

  @Autowired
  private EventLogRepository eventLogRepository;

  @Autowired
  private ObjectMapper objectMapper; // Jackson, for serializing the command

  @AfterReturning(
      pointcut = "execution(* com.valkyrlabs.llm.LLMCommandExecutor.executeCommand(..))",
      returning = "result"
  )
  public void auditCommand(JoinPoint joinPoint, Object result) throws JsonProcessingException {
    LLMCommand command = (LLMCommand) joinPoint.getArgs()[0];
    Authentication auth = (Authentication) joinPoint.getArgs()[1];

    // Persist an EventLog row for every executed command
    EventLog log = new EventLog();
    log.setEventType("LLM_COMMAND");
    log.setUserId(auth.getName());
    log.setDetails(objectMapper.writeValueAsString(command));
    log.setResult(result != null ? "SUCCESS" : "FAILURE");

    eventLogRepository.save(log);
  }
}

9. Implementation Roadmap

Phase 1: Foundation (Week 1)

  • ✅ Fragment System (Already exists!)

    • fetchMinimalFragment()
    • buildLLMContext()
    • fetchSchema(name)
  • ✅ Command Parser (Already exists!)

    • ---COMMAND--- delimiter parsing
    • JSON payload extraction
  • 🚧 LLMCommandExecutor Service

    • Create Java service class
    • Implement ACL whitelist
    • Add dynamic service bean lookup
  • 🚧 REST Controller

    • /v1/llm/execute endpoint
    • /v1/llm/execute-system endpoint

Phase 2: Frontend Integration (Week 2)

  • 🚧 LLMCommandService (TypeScript)

    • RTK Query mutations
    • Error handling
    • Cache invalidation
  • 🚧 SageChat Integration

    • Auto-execute commands from LLM
    • Display results in chat
    • Handle errors gracefully

Phase 3: SWARM Integration (Week 3)

  • 🚧 WebSocket Command Handler

    • Receive commands from ValorIDE
    • Execute in SYSTEM context
    • Send results back
  • 🚧 MothershipService Enhancement

    • sendLLMCommand() method
    • Result callback handling

Phase 4: Testing & Hardening (Week 4)

  • 🚧 Integration Tests

    • Test all model types
    • Test ACL enforcement
    • Test error scenarios
  • 🚧 Performance Testing

    • Fragment cache hit rate
    • Command execution latency
    • Concurrent command handling

10. Security Considerations

Threat Model

Threat: LLM creates malicious data

  • Mitigation: ACL whitelist blocks security-critical models (Principal, AclEntry)

Threat: LLM bypasses permissions

  • Mitigation: @PreAuthorize checks before execution

Threat: LLM reads sensitive data

  • Mitigation: @SecureField encryption, fields never exposed to LLM

Threat: LLM executes as wrong user

  • Mitigation: Authentication propagation, audit logging

Threat: Command injection attacks

  • Mitigation: Strict JSON parsing, no eval(), validation before execution (see the sketch below)
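
A sketch of the strict-parsing mitigation: the decoded JSON is shape-checked before execution, reusing the LLMCommand interface from section 2:

// Sketch: strict, eval-free command parsing with shape validation.
const VALID_ACTIONS = new Set(["create", "read", "update", "delete"]);

export function parseCommand(raw: string): LLMCommand {
  const value: any = JSON.parse(raw); // throws on malformed JSON; never eval()

  if (
    typeof value !== "object" || value === null ||
    !VALID_ACTIONS.has(value.action) ||
    typeof value.model !== "string" ||
    typeof value.payload !== "object" || value.payload === null
  ) {
    throw new Error("Invalid command structure");
  }

  return value as LLMCommand;
}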

Audit Trail

Every LLM command logged:

SELECT * FROM event_log
WHERE event_type = 'LLM_COMMAND'
ORDER BY created_at DESC;

Includes:

  • User ID (who executed)
  • Command (action + model + payload)
  • Result (success/failure)
  • Timestamp
  • IP address
  • Session ID

11. Comparison: Command Executor vs Direct REST

Option A: Unified Command Executor ✅

Pros:

  • Single endpoint for all models
  • Dynamic service lookup (no hardcoding)
  • ACL whitelist enforced centrally
  • Audit logging in one place
  • Easy to add new models (auto-supported)
  • LLM learns one pattern

Cons:

  • Slightly more complex backend implementation
  • Requires reflection for dynamic method invocation

Example:

POST /v1/llm/execute
{
  "action": "create",
  "model": "Workflow",
  "payload": { "name": "My Workflow" }
}

Option B: Direct REST Calls ❌

Pros:

  • Uses existing REST endpoints (no new code)
  • Type-safe (no reflection)

Cons:

  • LLM must learn 200+ endpoints
  • No central ACL enforcement
  • Audit logging scattered across services
  • LLM output more error-prone (wrong endpoint URLs)
  • Harder to validate before execution

Example:

POST /v1/Workflow
{
  "name": "My Workflow",
  "description": "..."
}

Verdict: Command Executor is far superior for LLM integration.


12. LLM Prompt Example

System Prompt (injected into SageChat):

# ValkyrAI System API

You have access to create, read, update, and delete data in the ValkyrAI system via commands.

## Available Models

You can create these models:

- Workflow: Automation workflows with tasks and modules
- Task: Individual steps in a workflow
- ExecModule: Connectors (Email, REST, Slack, AWS, etc.)
- Goal: Business objectives
- KeyMetric: Measurable KPIs
- Application: User applications

## Model Schemas

[Fragment content injected here from fetchMinimalFragment()]

## Command Format

To create data, output:
---COMMAND---
{
  "action": "create",
  "model": "ModelName",
  "payload": { ...model properties... }
}
---END-COMMAND---

## Example

User: "Create a workflow that sends an email when a new customer signs up"

You: I'll create that workflow for you.

---COMMAND---
{
  "action": "create",
  "model": "Workflow",
  "payload": {
    "name": "New Customer Onboarding",
    "description": "Sends welcome email to new customers",
    "tasks": [
      {
        "taskId": "send_welcome",
        "modules": [
          {
            "className": "com.valkyrlabs.workflow.modules.EmailModule",
            "moduleData": "{\"to\":\"{{customer.email}}\",\"subject\":\"Welcome!\",\"body\":\"Thank you for signing up!\"}"
          }
        ]
      }
    ]
  }
}
---END-COMMAND---

The system will execute this command and return the created workflow.

13. Conclusion

The Winning Architecture

Fragment System + Command Executor + ACL Enforcement = 🎯 Perfect LLM-to-REST Integration

Why This Works:

  1. Fast: Fragment caching (< 10ms fetch)
  2. Secure: ACL whitelist + permission checks
  3. Robust: Validation + error handling + audit trail
  4. Simple: LLM learns one pattern (command format)
  5. Scalable: New models auto-supported
  6. Flexible: User context (browser) + SYSTEM context (server)

Next Steps:

  1. Implement LLMCommandExecutor service
  2. Create /v1/llm/execute REST endpoint
  3. Add LLMCommandService TypeScript hooks
  4. Integrate into SageChat command parser
  5. Test with real LLM workflows
  6. Deploy to production

ETA: 2 weeks for full implementation + testing


Appendix A: Code Locations

| Component                 | File Path                                                    |
| ------------------------- | ------------------------------------------------------------ |
| Fragment Service          | valkyrai/src/main/java/.../api/ApiDocumentationService.java |
| Fragment Endpoint         | GET /v1/docs/fragments/minimal                               |
| Frontend Fragment Service | web/.../src/services/apiDocsService.ts                       |
| Command Parser (SageChat) | web/.../SageChat/index.tsx (lines 1190-1270)                 |
| ACL Service               | valkyrai/src/main/java/.../ValkyrAclService.java             |
| SystemUserService         | valkyrai/src/main/java/.../SystemUserService.java            |
| SWARM Integration         | ValorIDE/webview-ui/.../ValorIDEMothershipIntegration.ts     |
| Workflow Execution        | valkyrai/src/main/java/.../ValkyrWorkflowService.java        |

Appendix B: Performance Metrics (Target)

| Operation                 | Target Latency | Notes                         |
| ------------------------- | -------------- | ----------------------------- |
| Fragment fetch (cached)   | < 10ms         | SessionStorage lookup         |
| Fragment fetch (fresh)    | < 100ms        | Filesystem read               |
| Command validation        | < 50ms         | ACL check + schema validation |
| Command execution         | < 500ms        | REST API call + DB persist    |
| Batch commands (5 models) | < 2s           | Transaction boundary          |

Appendix C: Security Checklist

  • ✅ ACL whitelist for model types
  • ✅ @PreAuthorize permission checks
  • ✅ @SecureField encryption for sensitive data
  • ✅ Audit logging for all commands
  • ✅ Authentication propagation (user + SYSTEM)
  • ✅ Input validation before execution
  • ✅ No eval() or dynamic code execution
  • ✅ Rate limiting on /llm/execute endpoint
  • ✅ CSRF protection (Spring Security)
  • ✅ JWT token validation

Author: GitHub Copilot + ValkyrAI Team
Date: October 26, 2025
Status: Architecture Complete, Implementation Pending
Version: 1.0