This repository contains the dataset for **Finch**, an enterprise-grade benchmark for finance & accounting workflows.

---

## 🍻 Updates

* **2026-04-06**: FinWorkBench is accepted to ACL 2026 Findings.

---

## Dataset Description

Finch focuses on **messy and long-horizon finance & accounting workflows** that span:
 
…

We conduct both human and automated evaluations of frontier AI systems including …

---

## 🔍 Why FINCH is Hard

Most of the individual capabilities Finch probes — reading tables, interpreting formulas, writing code, searching the web — are things frontier LLMs already appear to handle well on isolated benchmarks. Yet performance degrades sharply on Finch. Our analysis points to **five intertwined properties** of real-world enterprise F&A work that make failures more likely and more catastrophic:

1. **Large, fragmented spreadsheet ecosystems.** Workflows routinely span dozens of interlinked workbooks and thousands of rows distributed across many sheets. Executing them accurately requires long-range cross-sheet navigation and precise referencing, which substantially increases the likelihood of small retrieval errors.

2. **Dense, semantically homogeneous content.** Many cells contain domain-specific financial concepts that are subtly different yet lexically similar (e.g., variants of revenue/expense items, adjusted vs. unadjusted metrics), making entity disambiguation and cell grounding unusually difficult.

3. **Complex and often irregular layouts.** Multi-level headers, merged cells, nested subtotals, and bespoke layouts force models to infer structure from noisy contents and ad hoc formatting. Tiny misinterpretations (e.g., off-by-one errors when specifying ranges) can propagate into globally incorrect outputs, especially when the same logic is applied in batch across many sheets.

4. **Latent business logic encoded in formulas.** Formulas encode temporal assumptions, fine-grained dependencies, and business logic that is not visible from displayed values alone. For example, a column header `IF NGPL MidContinent index (@ Baker)` looks like a daily exposure metric, but the associated formula `25 * V21 + C41 * C22` actually encodes a 55-day payment timing. Models that prioritize cell values and under-use formulas systematically misinterpret such columns, and the error propagates through subsequent steps (a minimal formula-inspection sketch follows this list).

5. **Multimodal, cross-artifact reasoning.** Many workflows combine spreadsheets with PDFs, charts, Word documents, and screenshots, requiring the agent to jointly reason over heterogeneous formats. Tables embedded in PDFs, for instance, are often only partially referenced, with key entries missing from the text channel.

It is the **combination** of these factors — rather than any single one — that drives the sharp performance degradation on real enterprise workflows, and it translates into substantial multi-step agent interaction at execution time (quantified below).
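
The formula-vs-value gap in point 4 is easy to see mechanically. Below is a minimal sketch using `openpyxl` — not Finch's actual tooling; the workbook path, sheet name, and cell address are hypothetical stand-ins — showing how the same cell reads completely differently depending on whether an agent inspects cached values or formulas:

```python
from openpyxl import load_workbook

PATH = "gas_exposure.xlsx"  # hypothetical workbook

# data_only=True yields the values Excel last cached on save;
# data_only=False yields the formula strings themselves.
wb_values = load_workbook(PATH, data_only=True)
wb_formulas = load_workbook(PATH, data_only=False)

sheet, addr = "Summary", "D21"  # hypothetical location
shown = wb_values[sheet][addr].value      # e.g. a bare number such as 1375.0
formula = wb_formulas[sheet][addr].value  # e.g. "=25 * V21 + C41 * C22"

# An agent that only ever sees `shown` cannot recover the timing
# assumptions encoded in `formula`.
print(f"{sheet}!{addr} displays {shown!r}, computed by {formula!r}")
```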

---

## 📊 Operational Complexity

Even successful Finch workflows demand substantial multi-step agent interaction. We conduct a case study using **Claude Coworker (Opus 4.6)** on 20 representative tasks (full table in the paper appendix). Excluding two web-search-heavy workflows that require 71 and 107 tool calls respectively — dominated by `websearch` / `webfetch` for external evidence gathering — the remaining tasks range from **6 to 25 tool calls** (mean 13.2, median 14, IQR 9–17; a short sketch at the end of this section shows how these aggregates are computed).

Two observations stand out:

- **Web-grounded tasks incur dramatically higher overhead.** On the two outlier tasks, web search and fetch together account for 48–56% of all tool calls; external retrieval, cross-source comparison, and verification dominate cost.
- **Task count alone does not predict overhead.** Even two workflows with the same task-type label (e.g., both `Calculation`) can differ nearly 2× in tool calls — intrinsic business-logic reasoning (sign conventions, method selection, cross-sheet consistency) matters more than how many task types a workflow is tagged with.

In other words: even single- or two-task workflows can require a dozen or more interleaved read / compute / verify steps, so the practical complexity of Finch is substantially higher than task counts alone would suggest.
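
As a worked example of the aggregates above, the mean, median, and quartiles of a set of per-task tool-call counts can be computed with Python's standard library. The counts below are illustrative placeholders constructed to match the reported aggregates, not the actual per-task numbers (those are in the paper appendix):

```python
import statistics

# Hypothetical tool-call counts for the 18 non-web-heavy tasks
# (illustrative only; chosen to reproduce the reported aggregates).
tool_calls = [6, 7, 9, 9, 9, 9, 11, 12, 14, 14, 14, 15, 16, 17, 17, 17, 17, 25]

mean = statistics.mean(tool_calls)
median = statistics.median(tool_calls)
q1, _, q3 = statistics.quantiles(tool_calls, n=4)  # quartile cut points

print(f"mean={mean:.1f}, median={median:.0f}, IQR={q1:.0f}-{q3:.0f}")
# -> mean=13.2, median=14, IQR=9-17
```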

---

## Examples

Example 1: Review the Inv & WC Value Adj summary tab and add the missing cross‑sheet data references to the other worksheets so the roll‑up pulls the correct figures. Return the updated file with those links in place.
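
The fix Example 1 asks for amounts to writing cross-sheet formula references. A minimal `openpyxl` sketch of that operation follows; the file name, sheet names, and cell addresses are hypothetical placeholders, not the benchmark workbook's actual layout:

```python
from openpyxl import load_workbook

wb = load_workbook("inv_wc_value_adj.xlsx")  # hypothetical file name
summary = wb["Inv & WC Value Adj"]           # hypothetical name of the roll-up tab

# Excel cross-sheet references quote sheet names that contain spaces:
# ='Sheet Name'!Cell
summary["C7"] = "='Raw Materials'!D42"   # hypothetical source sheet/cell
summary["C8"] = "='Finished Goods'!D42"  # hypothetical source sheet/cell

wb.save("inv_wc_value_adj_linked.xlsx")
```

Opening the saved file in Excel recomputes the roll-up from the linked cells, which is why the task asks for the updated file with live references rather than pasted values.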