How to Fix LLM Date and Time Issues in Production

I was recently working on a project that generates summary reports using the Anthropic Claude API. What looked correct at first eventually revealed some odd behavior in production. This post explains the problem we ran into and how we resolved it.

The Problem

The following is adapted from a real production system but generalized for this post.

Take a hypothetical SaaS application. The goal: generate a report of users who have low activity and identify those who are likely to churn. Let’s take a look at an example prompt:

require 'anthropic'
require 'json'

class ChurnRiskAnalyzer
  SYSTEM_PROMPT = <<~PROMPT
    You are a customer success analyst. Your job is to analyze user engagement
    data and identify customers at risk of churning.

    When analyzing users, identify which users are recently converted AND at high
    risk of churning due to low engagement. Pay special attention to:
    - Login frequency relative to their plan type
    - Feature adoption breadth
    - Time since trial conversion

    For each user, state:
    1. Days since conversion
    2. Whether they qualify as a "recent conversion"
    3. Your churn risk assessment and reasoning
  PROMPT

  def initialize
    # Anthropic::Client reads ANTHROPIC_API_KEY from the environment by default.
    @client = Anthropic::Client.new
  end

  def analyze(low_engagement_users)
    user_prompt = build_user_prompt(low_engagement_users)

    response = @client.messages.create(
      model: 'claude-sonnet-4-20250514',
      max_tokens: 1024,
      system: SYSTEM_PROMPT,
      messages: [
        { role: 'user', content: user_prompt }
      ]
    )

    # The response is a list of content blocks; return the first block's text.
    response.content.first.text
  end

  private

  def build_user_prompt(users)
    <<~PROMPT
      Analyze the following low-engagement users from the past 30 days:

      #{JSON.pretty_generate(users)}
    PROMPT
  end
end

# Example usage
analyzer = ChurnRiskAnalyzer.new

low_engagement_users = [
  {
    engagement_id: 'eng_001',
    last_login: '2025-12-28',
    logins_past_30_days: 2,
    features_used: ['dashboard'],
    user: {
      id: 'usr_4821',
      email: '[email protected]',
      plan: 'pro',
      trial_converted_at: '2025-02-15',
      company: 'Acme Corp'
    }
  },
  {
    engagement_id: 'eng_002',
    last_login: '2025-12-20',
    logins_past_30_days: 1,
    features_used: [],
    user: {
      id: 'usr_9174',
      email: '[email protected]',
      plan: 'pro',
      trial_converted_at: '2025-12-01',
      company: 'NewStartup'
    }
  }
]

puts analyzer.analyze(low_engagement_users)

Let’s assume the date is December 29th, 2025.

In this example we have two users with low engagement. [email protected] converted back in February 2025 and has 2 logins in the past 30 days. [email protected] converted December 1st, 2025 and has just 1 login.
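To make the arithmetic concrete, here’s a quick check in plain Ruby using the dates from the payload above:

require 'date'

reference = Date.new(2025, 12, 29)

# Subtracting two Dates yields the number of days between them.
(reference - Date.new(2025, 12, 1)).to_i  # => 28  days since Mike converted
(reference - Date.new(2025, 2, 15)).to_i  # => 317 days since Sarah converted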

The expected behavior: flag Mike as at risk since he converted recently and has minimal engagement. What actually happened: both Mike and Sarah were flagged. Let’s look at why.

Lack of guidance

In the first version of our system prompt, we say we want to identify recent conversions, but we never define what “recent” means. That leaves it up to the model to decide, which leads to non-deterministic and confusing results. The fix is to provide an explicit definition:

class ChurnRiskAnalyzer
  SYSTEM_PROMPT = <<~PROMPT
    You are a customer success analyst. Your job is to analyze user engagement
    data and identify customers at risk of churning.

    A "recent conversion" is defined as a user who converted from
    trial to paid within the past 30 days.

    When analyzing users, identify which users are recently converted AND at high
    risk of churning due to low engagement. Pay special attention to:

    - Login frequency relative to their plan type
    - Feature adoption breadth
    - Time since trial conversion

    For each user, state:
    1. Days since conversion
    2. Whether they qualify as a "recent conversion"
    3. Your churn risk assessment and reasoning
  PROMPT

  # ... rest of the class is unchanged
end

Now the model has explicit guidance on what makes a “recent conversion.” But we can’t stop here.

Providing a reference date

We’ve given the model an explicit definition of a “recent conversion,” but there’s still one problem: the model doesn’t know what the current date is. LLMs have no access to a system clock; they only know what you tell them. One way to resolve this is to provide the date as part of the prompt:

  # Note: Date.today needs a `require 'date'` at the top of the file.
  def build_user_prompt(users)
    <<~PROMPT
      Today's date is: #{Date.today.iso8601}.

      Analyze the following low-engagement users from the past 30 days:

      #{JSON.pretty_generate(users)}
    PROMPT
  end

Now the model has everything it needs for accurate reporting. Running the updated version correctly excludes Sarah, who converted months ago, and flags only Mike.
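One refinement worth sketching while we’re here (the reference_date parameter below is illustrative, not part of the production code): accept the reference date as an argument that defaults to Date.today. Tests can then pin a known date, and past reports become reproducible.

require 'date'

class ChurnRiskAnalyzer
  # Sketch: thread an explicit reference date through the call, defaulting
  # to today. Everything else matches the version above.
  def analyze(low_engagement_users, reference_date: Date.today)
    user_prompt = build_user_prompt(low_engagement_users, reference_date)

    response = @client.messages.create(
      model: 'claude-sonnet-4-20250514',
      max_tokens: 1024,
      system: SYSTEM_PROMPT,
      messages: [
        { role: 'user', content: user_prompt }
      ]
    )

    response.content.first.text
  end

  private

  def build_user_prompt(users, reference_date)
    <<~PROMPT
      Today's date is: #{reference_date.iso8601}.

      Analyze the following low-engagement users from the past 30 days:

      #{JSON.pretty_generate(users)}
    PROMPT
  end
end

With that in place, analyzer.analyze(low_engagement_users, reference_date: Date.new(2025, 12, 29)) produces the same prompt on every run.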

Conclusion

When working with LLMs and time-sensitive data:

  1. Be explicit about definitions - Don’t assume the model interprets terms like “recent” the same way you do.
  2. Always provide the current date - LLMs have no awareness of real-time; include today’s date in your prompt.
  3. Test with edge cases - Run your prompts with data that spans different time periods to catch these issues early; a short sketch follows this list.
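To illustrate the third point (this assumes the illustrative reference_date parameter sketched earlier), generate conversion dates that straddle the 30-day boundary and confirm each user is classified the way you expect:

reference = Date.new(2025, 12, 29)

# Conversion dates at 5, 29, 30, 31, and 300 days before the reference date,
# deliberately straddling the 30-day "recent conversion" boundary.
edge_case_users = [5, 29, 30, 31, 300].map.with_index do |days_ago, i|
  {
    engagement_id: "eng_edge_#{i}",
    last_login: reference.iso8601,
    logins_past_30_days: 1,
    features_used: [],
    user: {
      id: "usr_edge_#{i}",
      email: "edge#{i}@example.com",
      plan: 'pro',
      trial_converted_at: (reference - days_ago).iso8601,
      company: 'Example Co'
    }
  }
end

puts analyzer.analyze(edge_case_users, reference_date: reference)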

These might seem like small details, but in production systems where accuracy matters, they make the difference between useful analysis and misleading results. Subtle errors like these erode trust quickly.