Predictive Forecasts
Look into the future. Forecast your user's emotional trajectory up to 15 minutes ahead.
Time as a Training Variable
While the real-time GBI and BCI indices tell you how a user feels now, the LFM's true power lies in using time as its primary dimension. By analyzing real-time kinematic data alongside historical baselines and SAAQ ground-truth labels, the LFM builds a deeply personal cognitive-affective profile for every user.
At its core, the LFM is a sequence predictor. Just as an LLM predicts the next word, the LFM predicts the next emotional state. It forecasts short-horizon state transitions (typically up to 15 minutes ahead), allowing you to proactively adapt the experience before a user even realizes they are getting frustrated or bored.
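To picture what "predicting the next emotional state" means, here is a deliberately simplified sketch: a first-order transition table over emotional states. The real LFM is a learned sequence model; the state names and probabilities below are invented purely for illustration.

```typescript
// Illustrative only: a first-order transition table P(next | current).
// The actual LFM learns these dynamics per user; nothing here is SDK API.
type State = "flow" | "frustrated" | "bored";

const transitions: Record<State, Record<State, number>> = {
  flow:       { flow: 0.8, frustrated: 0.15, bored: 0.05 },
  frustrated: { flow: 0.3, frustrated: 0.5,  bored: 0.2  },
  bored:      { flow: 0.2, frustrated: 0.1,  bored: 0.7  },
};

// Most likely next state given the current one (argmax over the row).
function predictNext(current: State): State {
  const dist = transitions[current];
  return (Object.keys(dist) as State[]).reduce((a, b) =>
    dist[a] >= dist[b] ? a : b
  );
}

console.log(predictNext("frustrated")); // "frustrated"
```

A sequence model generalizes this idea: instead of conditioning on one state, it conditions on the whole recent history of kinematic signals.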
The Forecast Data Contract
Probability matrices and trajectory horizons.
When the LFM pushes a forecast down to the iOS SDK via WebSockets, it doesn't give a simple binary answer. It provides a probability distribution across various affective trajectories.
public enum AffectiveTrajectory: String {
    case sustainedFlow        // High probability of remaining engaged
    case cognitiveBurnout     // Path to BCI depletion
    case tilt                 // Aggressive, frustrated usage loop
    case sessionAbandonment   // High probability they will close the app
    case recovery             // Returning to a healthy baseline
}
public struct LFMForecast {
    // The time horizon this prediction covers, in minutes (default is 15)
    public let horizonMinutes: Int

    // A dictionary of trajectories and their probabilities (0.0 to 1.0)
    public let probabilities: [AffectiveTrajectory: Float]

    // The single most likely trajectory
    public let primaryTrajectory: AffectiveTrajectory

    // How far the user is currently drifting from their historical baseline
    public let baselineDeviationScore: Float

    // Convenience accessor that returns 0.0 for trajectories absent from the map
    public func getProbability(for trajectory: AffectiveTrajectory) -> Float {
        return probabilities[trajectory] ?? 0.0
    }
}

Accessing the Data
Long-term sequence prediction is computed entirely in the LFM to preserve iOS device resources. Forecasts are pushed down asynchronously. You can consume these forecasts via two distinct routes.
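The exact wire format is owned by the SDK, but it helps to picture a pushed forecast as JSON. In the sketch below, the field names mirror the `LFMForecast` struct above (the trajectory keys follow the Swift enum's implicit raw values); the JSON layout itself is an assumption for illustration. It also shows how `primaryTrajectory` can be derived as the argmax of the probability map:

```typescript
// Hypothetical forecast payload; shape assumed for illustration.
const raw = `{
  "horizonMinutes": 15,
  "probabilities": {
    "sustainedFlow": 0.10,
    "cognitiveBurnout": 0.05,
    "tilt": 0.72,
    "sessionAbandonment": 0.08,
    "recovery": 0.05
  },
  "baselineDeviationScore": 1.8
}`;

interface Forecast {
  horizonMinutes: number;
  probabilities: Record<string, number>;
  baselineDeviationScore: number;
}

const forecast: Forecast = JSON.parse(raw);

// The primary trajectory is simply the highest-probability entry.
const primaryTrajectory = Object.entries(forecast.probabilities)
  .reduce((a, b) => (a[1] >= b[1] ? a : b))[0];

console.log(primaryTrajectory); // "tilt"
```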
Route 1: In-App (Swift Client)
The easiest way to consume forecasts is directly inside your iOS app. The SDK maintains a secure WebSocket connection and executes your Swift closures whenever a new forecast is available.
import UIKit
import Nx10Core

class AppCoordinator {
    func startMonitoringForecasts() {
        Nx10Core.shared.insights.observeForecast { [weak self] forecast in
            guard let forecast = forecast else { return }

            let churnRisk = forecast.getProbability(for: .sessionAbandonment)
            let burnoutRisk = forecast.getProbability(for: .cognitiveBurnout)

            if churnRisk > 0.85 {
                print("[LFM WARNING] \(Int(churnRisk * 100))% chance of session end within \(forecast.horizonMinutes) mins.")
                self?.triggerRetentionOffer()
            } else if burnoutRisk > 0.70 {
                print("[LFM WARNING] User is approaching burnout.")
                self?.simplifyUserInterface()
            }
        }
    }
}

Route 2: Server-to-Server Webhooks
If your app relies on heavy backend orchestration (e.g., dispatching APNs push notifications or modifying user permissions), configure the Nx10 Control Plane to fire Webhooks directly to your backend.
app.post('/api/nx10/webhooks', express.json(), (req, res) => {
  const { deviceId, eventType, payload } = req.body;

  if (eventType === 'lfm_forecast_alert') {
    const forecast = payload.lfmForecast;
    // Trajectory keys match the AffectiveTrajectory raw values
    const churnRisk = forecast.probabilities['sessionAbandonment'] || 0;

    if (churnRisk > 0.80) {
      console.log(`[CRM] User ${deviceId} has a ${Math.round(churnRisk * 100)}% chance of churning.`);
      // Server pushes an actionable offer to their phone
      crmService.sendPushNotification(deviceId, "Wait! Here's 20% off your cart.");
    }
  }

  res.status(200).send('Received');
});

The Data Moat: Cold Start vs. Longitudinal
The LFM's advantage compounds over time because it learns your specific users. But how does it work on day one?
1. The Cold Start (Phenotyping)
When a brand new user installs your app, the LFM has no history for them. Within the first few minutes of interaction, the SDK collects their base TAG data and compares it against population-wide Affective Phenotypes. It clusters them into a known archetype (e.g., "Aggressive Tapper, Low Tremor") to provide a highly accurate initial baseline and a generalized forecast.
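Cold-start assignment can be pictured as a nearest-centroid lookup. In this sketch, the feature names, centroid values, and archetype labels are all hypothetical; the real phenotyping pipeline and its feature space are internal to the LFM.

```typescript
// Illustrative cold-start phenotyping: assign a new user to the nearest
// population-level archetype centroid. All values below are invented.
type Features = { tapForce: number; tremor: number; swipeSpeed: number };

const phenotypes: Record<string, Features> = {
  "Aggressive Tapper, Low Tremor": { tapForce: 0.9, tremor: 0.1, swipeSpeed: 0.7 },
  "Gentle Browser, High Tremor":   { tapForce: 0.2, tremor: 0.8, swipeSpeed: 0.3 },
};

// Euclidean distance in the (normalized) feature space.
function distance(a: Features, b: Features): number {
  return Math.hypot(
    a.tapForce - b.tapForce,
    a.tremor - b.tremor,
    a.swipeSpeed - b.swipeSpeed
  );
}

// Pick the archetype whose centroid is closest to the new user.
function classify(user: Features): string {
  return Object.entries(phenotypes).reduce((best, cur) =>
    distance(user, cur[1]) < distance(user, best[1]) ? cur : best
  )[0];
}

console.log(classify({ tapForce: 0.85, tremor: 0.15, swipeSpeed: 0.6 }));
// "Aggressive Tapper, Low Tremor"
```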
2. Longitudinal Observation (Per-User Embeddings)
As the user navigates your app over days and weeks, the LFM continually retrains. It generates a unique, compact vector (a Per-User Embedding) that represents their individual affective pattern, learning what their specific frustration looks like, independent of the general population. This builds a massive Data Moat: the longer they use your app, the more deeply the model understands them, and the more precisely your app can adapt to them.
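The "more usage, more personal" dynamic can be sketched with a simple stand-in: folding each session's feature vector into the embedding via an exponential moving average. The real Per-User Embedding is a learned representation, not an EMA; this only illustrates how the vector drifts from a population prior toward the individual.

```typescript
// Stand-in for longitudinal learning: blend each session's features into
// a per-user vector. alpha controls how fast the embedding personalizes.
function updateEmbedding(embedding: number[], session: number[], alpha = 0.1): number[] {
  return embedding.map((v, i) => (1 - alpha) * v + alpha * session[i]);
}

let userEmbedding = [0.5, 0.5, 0.5]; // population prior (cold start)

// Hypothetical per-session feature vectors for one user.
const sessions = [
  [0.9, 0.2, 0.6],
  [0.8, 0.3, 0.7],
  [0.9, 0.1, 0.6],
];

for (const s of sessions) {
  userEmbedding = updateEmbedding(userEmbedding, s);
}

// The embedding has drifted toward this user's own pattern.
console.log(userEmbedding.map(v => v.toFixed(2)));
```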
