Working with Time: Dates, Timestamps, and Temporal Queries
Every outage tells a story. The timestamps hold the clues.
The Problem
You are an ops engineer staring at two weeks of server event logs. Somewhere in those rows is the answer to why your API went down at 3 AM, whether that deploy on March 5th caused a cascade of errors, and which server is the noisiest offender. The data is there — it is just timestamped chaos.
To make sense of it, you need to bucket events by hour and day, measure gaps between failures, filter by time windows, and correlate events with deployments. TeideDB gives you the temporal toolkit to do all of this in SQL.
TeideDB's Temporal Type System
Before we dive into queries, it helps to know how TeideDB stores time internally:
| Type | Internal Storage | Literal Format |
|---|---|---|
| DATE | i32 — days since 2000-01-01 | '2025-03-01' |
| TIME | i32 — milliseconds since midnight | '14:30:00' or '14:30:00.500' |
| TIMESTAMP | i64 — microseconds since 2000-01-01 | '2025-03-01 08:30:00' |
String literals in 'YYYY-MM-DD HH:MM:SS' format are parsed automatically into the correct temporal type when they appear in a column context — for example, when compared against a TIMESTAMP column. You can also convert explicitly with CAST.
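As a quick illustration (using the events table built in the next section), these two filters should be equivalent — the first relies on implicit parsing, the second spells out the conversion:

```sql
-- Implicit: the string literal is parsed as a TIMESTAMP because it is
-- compared against a TIMESTAMP column
SELECT COUNT(*) FROM events WHERE occurred_at >= '2025-03-05 00:00:00';

-- Explicit: the same filter with CAST
SELECT COUNT(*) FROM events
WHERE occurred_at >= CAST('2025-03-05 00:00:00' AS TIMESTAMP);
```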
Building the Dataset
We will work with two tables: server events and deployment records. This is a realistic slice of what you would see in a monitoring system over two weeks.
CREATE TABLE events (
id INTEGER, server_name VARCHAR, event_type VARCHAR,
severity INTEGER, occurred_at TIMESTAMP
);
INSERT INTO events VALUES
(1, 'web-01', 'error', 3, '2025-03-01 02:15:30'),
(2, 'web-01', 'warning', 2, '2025-03-01 02:16:00'),
(3, 'web-02', 'error', 4, '2025-03-01 08:30:00'),
(4, 'api-01', 'error', 5, '2025-03-01 08:31:15'),
(5, 'api-01', 'error', 5, '2025-03-01 08:32:00'),
(6, 'web-01', 'info', 1, '2025-03-02 10:00:00'),
(7, 'web-02', 'warning', 2, '2025-03-03 14:22:10'),
(8, 'api-01', 'error', 3, '2025-03-04 09:00:00'),
(9, 'web-01', 'error', 4, '2025-03-05 16:45:00'),
(10, 'db-01', 'error', 5, '2025-03-05 16:45:30'),
(11, 'db-01', 'error', 5, '2025-03-05 16:46:00'),
(12, 'web-02', 'info', 1, '2025-03-06 08:00:00'),
(13, 'api-01', 'warning', 2, '2025-03-07 11:30:00'),
(14, 'web-01', 'error', 3, '2025-03-08 03:00:00'),
(15, 'db-01', 'error', 4, '2025-03-08 03:01:00'),
(16, 'web-02', 'error', 3, '2025-03-09 20:15:00'),
(17, 'api-01', 'error', 4, '2025-03-10 12:00:00'),
(18, 'web-01', 'info', 1, '2025-03-11 09:00:00'),
(19, 'db-01', 'warning', 2, '2025-03-12 15:30:00'),
(20, 'api-01', 'error', 5, '2025-03-13 22:00:00');
CREATE TABLE deployments (
id INTEGER, service_name VARCHAR, deployed_at TIMESTAMP, version VARCHAR
);
INSERT INTO deployments VALUES
(1, 'web-app', '2025-03-01 02:00:00', 'v2.3.1'),
(2, 'api-svc', '2025-03-01 08:00:00', 'v1.9.0'),
(3, 'web-app', '2025-03-05 16:00:00', 'v2.4.0'),
(4, 'db-migrate', '2025-03-05 16:30:00', 'v3.1.0'),
(5, 'api-svc', '2025-03-10 11:00:00', 'v1.9.1'),
(6, 'web-app', '2025-03-13 21:00:00', 'v2.4.1');
A quick sanity check — let us see what we have:
SELECT COUNT(*) AS total_events FROM events;
SELECT COUNT(*) AS total_deploys FROM deployments;
Extracting Components with EXTRACT
The EXTRACT function pulls individual fields out of a temporal value.
This is your primary tool for grouping events by hour of day, day of week,
or month.
Supported fields: YEAR, MONTH, DAY,
HOUR, MINUTE, SECOND,
DOW (ISO day of week: Monday=1, Sunday=7), and EPOCH.
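A minimal sketch of EXTRACT on a single literal. The DOW value follows the ISO convention above; for EPOCH we assume the common convention of seconds since 1970-01-01 — check your build if the exact value matters:

```sql
SELECT
  EXTRACT(YEAR  FROM CAST('2025-03-05 16:45:00' AS TIMESTAMP)) AS y,    -- 2025
  EXTRACT(DOW   FROM CAST('2025-03-05 16:45:00' AS TIMESTAMP)) AS dow,  -- 3 (Wednesday)
  EXTRACT(EPOCH FROM CAST('2025-03-05 16:45:00' AS TIMESTAMP)) AS epoch_s;
```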
First question from the incident review: which hour of the day produces the most errors?
SELECT
EXTRACT(HOUR FROM occurred_at) AS hour_of_day,
COUNT(*) AS error_count
FROM events
WHERE event_type = 'error'
GROUP BY EXTRACT(HOUR FROM occurred_at)
ORDER BY error_count DESC;
Hours 8 and 16 are tied for the most errors. That is morning deploy time and late afternoon — suspicious. Let us also check which day of the week is worst:
SELECT
EXTRACT(DOW FROM occurred_at) AS day_of_week,
COUNT(*) AS error_count
FROM events
WHERE event_type = 'error'
GROUP BY EXTRACT(DOW FROM occurred_at)
ORDER BY error_count DESC;
Day 6 is Saturday and day 3 is Wednesday. Saturday dominates with 6 errors, likely from weekend deploys with fewer eyes on the dashboards — a pattern worth investigating.
Bucketing with DATE_TRUNC
DATE_TRUNC rounds a timestamp down to a specified precision. It is
perfect for building time-series histograms. Supported units:
'year', 'month', 'day',
'hour', 'minute', 'second'.
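Truncation simply zeroes out every field below the chosen unit:

```sql
SELECT DATE_TRUNC('hour', CAST('2025-03-05 16:45:30' AS TIMESTAMP));
-- 2025-03-05 16:00:00

SELECT DATE_TRUNC('month', CAST('2025-03-05 16:45:30' AS TIMESTAMP));
-- 2025-03-01 00:00:00
```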
Let us get a daily error count across our two-week window:
SELECT
DATE_TRUNC('day', occurred_at) AS day,
COUNT(*) AS event_count,
SUM(CASE WHEN event_type = 'error' THEN 1 ELSE 0 END) AS error_count
FROM events
GROUP BY DATE_TRUNC('day', occurred_at)
ORDER BY day;
March 1st and March 5th jump out immediately — 4 and 3 errors respectively. And both coincide with deployments. We are building a picture.
For a closer look at March 1st, let us bucket by hour:
SELECT
DATE_TRUNC('hour', occurred_at) AS hour,
COUNT(*) AS event_count
FROM events
WHERE occurred_at BETWEEN CAST('2025-03-01 00:00:00' AS TIMESTAMP) AND CAST('2025-03-01 23:59:59' AS TIMESTAMP)
GROUP BY DATE_TRUNC('hour', occurred_at)
ORDER BY hour;
Two clusters: a pair of events at 2 AM (right after the web-app deploy at 02:00) and three events at 8 AM (right after the api-svc deploy at 08:00).
Measuring Time Gaps with DATE_DIFF
DATE_DIFF(unit, start, end) computes the difference between two
timestamps in the specified unit. Also available as DATEDIFF.
Supported units: 'second', 'minute',
'hour', 'day', 'month',
'year'.
How many days does our event window span?
SELECT DATE_DIFF('day',
CAST('2025-03-01 02:15:30' AS TIMESTAMP),
CAST('2025-03-13 22:00:00' AS TIMESTAMP)
) AS span_days;
Now a more interesting question: what is the average number of hours between errors, per server? We can use a self-join to pair each error with every later error on the same server and measure the gaps:
SELECT
e1.server_name,
AVG(DATE_DIFF('hour', e1.occurred_at, e2.occurred_at)) AS avg_hours_between_errors
FROM events e1, events e2
WHERE e1.server_name = e2.server_name
AND e1.event_type = 'error'
AND e2.event_type = 'error'
AND e2.id > e1.id
AND e2.occurred_at > e1.occurred_at
GROUP BY e1.server_name
ORDER BY avg_hours_between_errors;
db-01 has the tightest error clustering — the smallest average gap between error pairs. It is either fragile or getting hammered by something upstream.
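One caveat: the self-join averages over every ordered pair of errors, which over-weights servers whose errors come in bursts. If your TeideDB build supports standard SQL window functions (an assumption — this chapter has not covered them), measuring only consecutive gaps is more direct with LAG:

```sql
-- Gap between each error and the previous error on the same server.
-- Sketch only: assumes window-function support (LAG ... OVER).
SELECT
  server_name,
  occurred_at,
  DATE_DIFF('hour',
    LAG(occurred_at) OVER (PARTITION BY server_name ORDER BY occurred_at),
    occurred_at) AS hours_since_prev_error
FROM events
WHERE event_type = 'error';
```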
Filtering by Time Windows
The BETWEEN operator works naturally with timestamps. Combined with
severity filters, you can quickly zero in on critical events in a specific window.
-- Critical errors (severity >= 4) in the first week
SELECT id, server_name, severity, occurred_at
FROM events
WHERE event_type = 'error'
AND severity >= 4
AND occurred_at BETWEEN CAST('2025-03-01 00:00:00' AS TIMESTAMP) AND CAST('2025-03-07 23:59:59' AS TIMESTAMP)
ORDER BY occurred_at;
Six critical errors in week one, and every single one falls within an hour of a deployment. That is not a coincidence.
You can also filter events after a specific point in time:
-- All events after March 10
SELECT id, server_name, event_type, occurred_at
FROM events
WHERE occurred_at >= CAST('2025-03-10 00:00:00' AS TIMESTAMP)
ORDER BY occurred_at;
The Investigation: Post-Deploy Error Spikes
This is the query that matters. We want to join events with deployments and find errors that occurred within 2 hours of a deploy. If a pattern emerges, we know our deployment process needs guardrails.
SELECT
d.version,
d.service_name,
d.deployed_at,
e.server_name,
e.severity,
e.occurred_at,
DATE_DIFF('minute', d.deployed_at, e.occurred_at) AS minutes_after_deploy
FROM events e, deployments d
WHERE e.event_type = 'error'
AND e.occurred_at >= d.deployed_at
AND DATE_DIFF('minute', d.deployed_at, e.occurred_at) <= 120
ORDER BY d.deployed_at, e.occurred_at;
There it is. Every single deployment in this dataset has at least one error within two hours. The v1.9.0 api-svc deploy was followed by three errors in a two-minute burst, starting half an hour after rollout. The v2.4.0 web-app and v3.1.0 db-migrate deploys on March 5th produced a cascade across web-01 and db-01.
Let us summarize by deployment:
SELECT
d.version,
d.service_name,
COUNT(*) AS errors_within_2h,
MAX(e.severity) AS max_severity
FROM events e, deployments d
WHERE e.event_type = 'error'
AND e.occurred_at >= d.deployed_at
AND DATE_DIFF('minute', d.deployed_at, e.occurred_at) <= 120
GROUP BY d.version, d.service_name
ORDER BY errors_within_2h DESC;
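To single out only the deployments whose window contained a critical (severity-5) error, a HAVING clause on the same aggregation should do it — a sketch, assuming TeideDB supports standard HAVING:

```sql
SELECT
  d.version,
  d.service_name,
  COUNT(*) AS errors_within_2h
FROM events e, deployments d
WHERE e.event_type = 'error'
  AND e.occurred_at >= d.deployed_at
  AND DATE_DIFF('minute', d.deployed_at, e.occurred_at) <= 120
GROUP BY d.version, d.service_name
HAVING MAX(e.severity) = 5  -- keep only deploys with a severity-5 error in the window
ORDER BY errors_within_2h DESC;
```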
CAST and Constant Dates
Sometimes you need to convert a string to a temporal type explicitly, or grab the current date. TeideDB supports both.
-- Explicit type conversion
SELECT CAST('2025-03-15' AS DATE);
SELECT CAST('14:30:00' AS TIME);
SELECT CAST('2025-03-15 09:30:00' AS TIMESTAMP);
-- Current date and time (computed at query plan time)
SELECT CURRENT_DATE();
SELECT NOW();
SELECT CURRENT_TIMESTAMP();
These are useful for computing how long ago something happened:
-- Days since last error per server
SELECT
server_name,
MAX(occurred_at) AS last_error,
DATE_DIFF('day', MAX(occurred_at), CURRENT_TIMESTAMP()) AS days_since
FROM events
WHERE event_type = 'error'
GROUP BY server_name
ORDER BY days_since;
CURRENT_DATE(), NOW(), and
CURRENT_TIMESTAMP() are evaluated once at query plan time, not per
row. This means they return the same value for every row in a single query, which
is the behavior you want for consistent filtering and reporting.
Incident Report: The Final Query
Let us pull everything together into a single incident report. We want a daily summary that flags days with deployments and ranks servers by error count.
SELECT
DATE_TRUNC('day', e.occurred_at) AS day,
e.server_name,
COUNT(*) AS error_count,
MAX(e.severity) AS max_severity
FROM events e
WHERE e.event_type = 'error'
GROUP BY DATE_TRUNC('day', e.occurred_at), e.server_name
ORDER BY day, error_count DESC;
Cross-reference with the deployments table: March 1, 5, 10, and 13 all had deploys. The highest severity errors (5) cluster on those days. The data makes the case for staging environments and canary deploys better than any meeting ever could.
Challenges
1. Use EXTRACT(HOUR FROM occurred_at) with a CASE expression to assign each event to a shift, then GROUP BY the shift label. Which shift has the most severity-5 errors?
2. Count the distinct servers affected by each deployment: use COUNT(DISTINCT e.server_name) with the same cross-join pattern from the investigation section, but tighten the DATE_DIFF threshold to 60 minutes.
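A starting skeleton for the first challenge — the shift boundaries here are an arbitrary choice, not something the dataset dictates:

```sql
SELECT
  CASE
    WHEN EXTRACT(HOUR FROM occurred_at) < 8  THEN 'night'
    WHEN EXTRACT(HOUR FROM occurred_at) < 16 THEN 'day'
    ELSE 'evening'
  END AS shift,
  COUNT(*) AS sev5_errors
FROM events
WHERE event_type = 'error' AND severity = 5
GROUP BY
  CASE
    WHEN EXTRACT(HOUR FROM occurred_at) < 8  THEN 'night'
    WHEN EXTRACT(HOUR FROM occurred_at) < 16 THEN 'day'
    ELSE 'evening'
  END;
```

Repeating the CASE expression in the GROUP BY mirrors the pattern used with EXTRACT earlier in this chapter.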
What's Next
Temporal queries let you turn raw timestamps into operational insight. You have seen how to extract components, bucket into time periods, measure gaps, filter windows, and correlate events with external data like deployments.
If your data has relationships — servers depending on services, services calling other services — the next step is to model those connections as a graph. Head to Graph Queries (SQL/PGQ) to learn how TeideDB handles graph pattern matching natively in SQL.