# Feather DB — Benchmark Result Audit Trail

Per-run JSON results for Feather DB v0.8.0 on canonical retrieval and memory benchmarks. Every number cited in the report and arXiv paper maps back to one of these files.
## Why this dataset exists
Most "AI memory" tools publish marketing numbers without a reproducible audit trail. We disagree with that practice. Every JSON here is a complete record of one benchmark run — config, environment, per-axis scores, failure traces, wall time, hardware. If you re-run the same command and get different numbers, please open an issue with your JSON.
## Headlines (from these files)
| Run | Variant | Answerer | Overall | File |
|---|---|---|---|---|
| LongMemEval, decay on, GPT-4o | S | gpt-4o | 0.693 | longmemeval__s__20260426_110536.json |
| LongMemEval, decay on, Gemini-Flash | S | gemini-2.5-flash | 0.657 | longmemeval__s__20260426_025723.json |
| LongMemEval, decay on | oracle | gemini-2.5-flash | 0.670 | longmemeval__oracle__20260425_200250.json |
| LongMemEval, no decay | oracle | gemini-2.5-flash | 0.656 | longmemeval__oracle__20260425_191858.json |
| SIFT1M @ 500K, ef sweep | sift1m | n/a (ANN only) | recall@10=0.972 | vector_ann_real__sift1m__20260425_154057.json |
## Schema (each JSON)

```jsonc
{
  "scenario": "longmemeval | vector_ann | vector_ann_real",
  "dataset": "oracle | s | sift1m | siftsmall | synthetic",
  "n": 500,                       // questions or vectors
  "dim": 1536,                    // embedder / vector dimensionality
  "feather_version": "0.8.0",
  "python_version": "3.12.x",
  "platform": "Darwin arm64",
  "commit": "<short git sha>",
  "started_at": <unix timestamp>,
  "wall_seconds": 16326,
  "params": { /* knobs: k, ef, decay, embedder, judge */ },
  "metrics": {
    "overall": 0.693,
    "by_axis": { /* per-axis means */ },
    "by_question_type": { /* per-type means */ },
    "per_q_seconds_p50": 32.1,
    "per_q_seconds_mean": 32.4,
    "n_questions": 500,
    "n_scored": 495,
    "n_failures": 5,
    "failures_sample": [ /* up to 10 failure rows */ ],
    "embedder": "azure_text-embedding-3-small_d1536",
    "judge": "llm_judge=gemini/gemini-2.0-flash_ans=azure/gpt-4o-feather",
    "decay_engaged": true,
    "decay_half_life_days": 14.0,
    "decay_time_weight": 0.4
  }
}
```
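The three `decay_*` fields record the temporal-decay configuration of a run. As a rough illustration of what such knobs typically control — an assumed blend for exposition only, not Feather DB's actual scoring code — a half-life discounts older memories and a time weight mixes that discount into the retrieval score:

```python
def decayed_score(similarity: float, age_days: float,
                  half_life_days: float = 14.0,
                  time_weight: float = 0.4) -> float:
    """Blend raw retrieval similarity with an exponential recency term.

    `half_life_days` and `time_weight` mirror the schema fields
    `decay_half_life_days` and `decay_time_weight`; the blend itself
    is a hypothetical illustration, not Feather DB's formula.
    """
    recency = 0.5 ** (age_days / half_life_days)  # 1.0 now, 0.5 after one half-life
    return (1.0 - time_weight) * similarity + time_weight * recency

# A memory with similarity 0.8 that is exactly one half-life (14 days) old:
print(decayed_score(0.8, 14.0))  # 0.6*0.8 + 0.4*0.5 = 0.68
```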
## How to load

```python
from datasets import load_dataset

ds = load_dataset("Hawky-ai/feather-db-benchmarks")
print(ds)
# Each row corresponds to one benchmark run.
# Filter by scenario / dataset / answerer in your analysis.
```
Or just clone and parse directly:

```python
import json, glob

runs = [json.load(open(p)) for p in glob.glob("*.json")]
```
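From there, a quick aggregate — say, mean overall score per scenario — takes a few lines. A convenience sketch (not part of the harness), keyed on the `scenario` and `metrics.overall` fields from the schema above:

```python
import json
from collections import defaultdict

def mean_overall_by_scenario(paths):
    """Mean `metrics.overall` per scenario across run JSON files."""
    by_scenario = defaultdict(list)
    for p in paths:
        with open(p) as f:
            run = json.load(f)
        overall = run.get("metrics", {}).get("overall")
        if overall is not None:
            by_scenario[run["scenario"]].append(overall)
    return {s: sum(v) / len(v) for s, v in by_scenario.items()}
```

Point it at `glob.glob("*.json")` after cloning the repo.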
## How to reproduce

The harness lives at `bench/` in the source repo. Each scenario is a single CLI invocation:

```bash
# LongMemEval (the headline)
python -m bench run longmemeval --dataset s --limit 0 \
  --embedder openai \
  --answerer-provider azure --answerer-model gpt-4o-feather \
  --judge llm --judge-provider gemini --judge-model gemini-2.0-flash \
  --decay-half-life 14 --decay-time-weight 0.4 --k 10

# SIFT1M ANN sweep
python -m bench run vector_ann_real --dataset sift1m \
  --n 500000 --queries 1000 --k 10 \
  --ef-sweep "10,50,100,200"
```
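The ANN run reports recall@10, read here as the fraction of queries whose true nearest neighbor appears among the 10 returned ids. A minimal reference implementation under that standard definition (the harness may compute it differently):

```python
def recall_at_k(results, ground_truth, k=10):
    """Fraction of queries whose true nearest-neighbor id is in the top-k results.

    results: per-query lists of returned ids; ground_truth: true NN id per query.
    """
    hits = sum(1 for res, gt in zip(results, ground_truth) if gt in res[:k])
    return hits / len(ground_truth)

# Two queries: the first finds its true neighbor (id 7), the second misses (id 5)
print(recall_at_k([[3, 7, 1], [9, 2]], [7, 5]))  # → 0.5
```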
Result JSONs are written automatically to `bench/results/`, and a rolled-up Markdown table to `bench/reports/latest.md`. New JSONs go here.
## What's NOT in here

- The LongMemEval source dataset itself (lives at xiaowu0162/longmemeval-cleaned; not redistributed by us).
- The SIFT1M source dataset (downloads from corpus-texmex.irisa.fr).
- API keys, embedder weights, or verbatim LLM responses. We log enough metadata to reproduce; we don't republish closed-model outputs.
## License
MIT. Same as Feather DB.
## Citation
If you use these results in academic work, cite the Feather DB paper:
```bibtex
@article{featherdb2026,
  title   = {Feather DB: An Embedded Vector Database with Adaptive Temporal
             Decay and Hybrid BM25/Dense Retrieval},
  author  = {Hawky.ai},
  year    = {2026},
  journal = {arXiv preprint},
  url     = {https://github.com/feather-store/feather/blob/master/docs/featherdb_paper.pdf}
}
```
And cite the underlying benchmarks:
- LongMemEval: Xu et al., 2024 (arXiv:2410.10813)
- SIFT1M: Jégou, Douze, Schmid, 2011 (INRIA report)
Maintained by Hawky.ai. Last update: 2026-04-26.