
How to Flatten Nested JSON to a Flat CSV (With Headers Preserved)

API responses are nested. Spreadsheets are flat. Here is the dot-notation rule that turns any nested JSON into a clean CSV — including arrays, mixed shapes, and missing fields.

You hit an API endpoint, get back a clean JSON response, and your manager asks for "the same thing in Excel." Five minutes later you're staring at a nested object three levels deep and wondering whether to write a script or just reformat 400 rows by hand. Neither is necessary — but the answer isn't a one-click conversion either, because flattening JSON to CSV involves real decisions about how nested fields, arrays, and missing values should appear in a flat grid.

This guide walks through the dot-notation rule that most converters use, the corner cases that trip everyone up (mixed-shape arrays, missing keys, deeply nested objects), and how to get a CSV that opens cleanly in Excel without the column-misalignment chaos.

Why "flat" is harder than it sounds

JSON is naturally hierarchical. A user record might look like this:

{
  "id": 42,
  "name": "Ada Lovelace",
  "address": {
    "city": "London",
    "zip": "WC2N"
  },
  "roles": ["admin", "editor"]
}

A spreadsheet has no concept of "address contains city and zip." It has columns. So the converter has to decide: do address.city and address.zip become two columns? Does the roles array become one column with comma-separated values, or two columns named roles[0] and roles[1]? There's no single right answer — but there is a widely-used convention.

The dot-notation rule

Most JSON-to-CSV tools (including ours) flatten nested objects into dot-paths. The example above produces these column headers:

id,name,address.city,address.zip,roles[0],roles[1]
42,Ada Lovelace,London,WC2N,admin,editor

Each level of nesting becomes a dot, each array index becomes a bracketed number. This is human-readable, reversible (you can rebuild the JSON from the headers), and Excel-friendly. The downside: deeply nested objects produce long column names. A response with user.profile.preferences.notifications.email.enabled turns into a column header that's wider than the data underneath it.
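The rule itself fits in a short recursive function. Here's a minimal sketch in Python (the `flatten` function and its signature are my own, not any particular library's API):

```python
def flatten(value, prefix=""):
    """Flatten nested dicts/lists into {dot.path: scalar} pairs."""
    if isinstance(value, dict):
        out = {}
        for key, sub in value.items():
            # Each level of nesting adds a dot to the path.
            path = f"{prefix}.{key}" if prefix else key
            out.update(flatten(sub, path))
        return out
    if isinstance(value, list):
        out = {}
        for i, sub in enumerate(value):
            # Each array index becomes a bracketed number.
            out.update(flatten(sub, f"{prefix}[{i}]"))
        return out
    return {prefix: value}

record = {
    "id": 42,
    "name": "Ada Lovelace",
    "address": {"city": "London", "zip": "WC2N"},
    "roles": ["admin", "editor"],
}
print(flatten(record))
# {'id': 42, 'name': 'Ada Lovelace', 'address.city': 'London',
#  'address.zip': 'WC2N', 'roles[0]': 'admin', 'roles[1]': 'editor'}
```

The keys of the flattened dict are exactly the CSV headers from the example above.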

If you'd rather paste your JSON and see the flattened CSV immediately — including a header preview — our JSON to CSV converter handles this conversion in the browser with no upload.

Mixed-shape arrays — the silent killer

The dot-notation rule works perfectly when every record has the same shape. It breaks the moment your array contains objects with different keys:

[
  { "name": "Ada", "email": "ada@example.com" },
  { "name": "Grace", "phone": "+1-555-0100" }
]

A naive converter produces this:

name,email
Ada,ada@example.com
Grace,

Notice that Grace's phone disappeared entirely — the converter only saw email in the first object and built the schema from there. A correct converter scans every object first, builds the union of all keys, and emits empty cells where a value is missing. The output should be:

name,email,phone
Ada,ada@example.com,
Grace,,+1-555-0100

If your CSV is suspiciously narrow or some rows seem to be missing data, the most likely cause is a converter using the first row as the schema. Switch to one that does a full pre-scan — and if you suspect the JSON itself has invalid syntax, run it through a JSON formatter first to confirm it parses cleanly before debugging the conversion.
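The pre-scan is cheap to implement yourself. A sketch using Python's standard-library csv.DictWriter, which emits an empty cell (via restval) wherever a row is missing a key:

```python
import csv
import io

records = [
    {"name": "Ada", "email": "ada@example.com"},
    {"name": "Grace", "phone": "+1-555-0100"},
]

# Pass 1: build the union of all keys, preserving first-seen order.
headers = []
for rec in records:
    for key in rec:
        if key not in headers:
            headers.append(key)

# Pass 2: write rows; missing keys become empty cells.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=headers, restval="")
writer.writeheader()
writer.writerows(records)
print(buf.getvalue())
# name,email,phone
# Ada,ada@example.com,
# Grace,,+1-555-0100
```

Note the two passes: the schema is finalized before the first row is written, so Grace's phone survives.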

Arrays of primitives: indexed columns vs. join

For arrays like "tags": ["urgent", "open", "billing"], you have two reasonable choices:

  • Indexed columns — tags[0], tags[1], tags[2]. Preserves order, but the column count varies row to row.
  • Joined string — single tags column containing "urgent;open;billing". Stable column count, but you've lost the structure.

Pick indexed columns if downstream consumers will pivot or filter by individual tags. Pick the joined string if the CSV is going to a human in Excel who'll just read the values. A common pitfall with the joined approach: don't use commas as the delimiter inside the joined string — they collide with CSV's column separator and break the file.
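Both strategies are a one-liner; here's a sketch of each (the semicolon delimiter is an arbitrary choice, any character that can't appear in your values works):

```python
tags = ["urgent", "open", "billing"]

# Indexed columns: one header per element, order preserved.
indexed = {f"tags[{i}]": t for i, t in enumerate(tags)}
print(indexed)  # {'tags[0]': 'urgent', 'tags[1]': 'open', 'tags[2]': 'billing'}

# Joined string: one stable column, semicolon-delimited so it
# doesn't collide with CSV's comma separator.
joined = {"tags": ";".join(tags)}
print(joined)   # {'tags': 'urgent;open;billing'}
```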

Excel quirks: encoding, leading zeros, and dates

Three Excel-specific issues bite almost everyone exporting JSON for spreadsheet users:

  • UTF-8 BOM. Excel on Windows opens UTF-8 CSVs as if they were Latin-1 — accents and emoji turn into garbage — unless the file starts with a UTF-8 byte order mark. A good converter adds the BOM by default.
  • Leading-zero strings. A US ZIP code like "00501" becomes 501 the moment Excel parses the column as numeric. The fix is to wrap the value as ="00501" in the CSV — ugly, but it forces text mode.
  • ISO date strings. Excel happily reformats 2026-05-07T14:30:00Z into a locale-dependent display string the moment you click the cell. If you need the original string preserved, the same = trick works.
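In Python, all three fixes look like this (the "utf-8-sig" codec prepends the BOM automatically; the filename is illustrative):

```python
import csv

# "utf-8-sig" writes the UTF-8 BOM so Excel on Windows decodes correctly.
with open("export.csv", "w", encoding="utf-8-sig", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["zip", "created_at"])
    # Wrapping a value as ="..." forces Excel into text mode,
    # preserving leading zeros and ISO date strings verbatim.
    writer.writerow(['="00501"', '="2026-05-07T14:30:00Z"'])
```

The csv module will quote-escape the embedded quotes, which is exactly what Excel expects for this trick.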

Preserving headers across exports

A subtle problem when you're exporting the same endpoint repeatedly: if today's response happens to have no middle_name values populated, that column might disappear from the CSV entirely — and downstream pipelines that expected a fixed schema will break.

The fix is to define the expected headers once (from your JSON Schema or a sample full record) and force the converter to emit them, even when every value in the column is empty. Some converters expose this as an "explicit headers" option; others don't, in which case post-processing is required. The principle: schema first, data second. Don't let row-level sparsity erase columns from your CSV.
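With the standard library, "schema first" means fieldnames comes from your schema rather than from the data. A sketch (EXPECTED_HEADERS is illustrative, you'd derive it from your JSON Schema or a known-complete record):

```python
import csv
import io

# Schema-first: these headers are emitted even when every value is empty.
EXPECTED_HEADERS = ["id", "name", "middle_name", "email"]

records = [
    {"id": 1, "name": "Ada", "email": "ada@example.com"},
    {"id": 2, "name": "Grace", "email": "grace@example.com"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=EXPECTED_HEADERS,
                        restval="",              # empty cell for absent keys
                        extrasaction="ignore")   # drop keys outside the schema
writer.writeheader()
writer.writerows(records)
print(buf.getvalue())
# id,name,middle_name,email
# 1,Ada,,ada@example.com
# 2,Grace,,grace@example.com
```

Even though no record has a middle_name, the column survives, so downstream pipelines keep their fixed schema.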

The reverse trip: CSV back to JSON

Dot-notation is reversible, which is its hidden superpower. If your CSV header is address.city, a CSV-to-JSON converter can rebuild the nested structure unambiguously. This matters when you're using a spreadsheet as the editing UI for hierarchical data — analysts edit a flat grid, you re-import the result, and the nesting reappears intact.

Two rules for round-tripping safely: never use literal dots inside your JSON keys (escape them or rename), and decide upfront whether roles[]-style array headers should preserve sparse indices (skipping roles[2] when roles[1] is missing) or compact them.
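A sketch of the reverse operation, rebuilding nesting from dot/bracket headers. This version makes both decisions the simple way: it assumes keys contain no literal dots, and it compacts sparse array indices (the helper names are my own):

```python
import re

def unflatten(row):
    """Rebuild a nested structure from {dot.path: value} pairs."""
    root = {}
    for path, value in row.items():
        # "address.city" -> ["address", "city"]; "roles[0]" -> ["roles", "[0]"]
        parts = re.findall(r"[^.\[\]]+|\[\d+\]", path)
        node = root
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return _to_lists(root)

def _to_lists(node):
    """Convert dicts whose keys are all [n] indices into real lists."""
    if not isinstance(node, dict):
        return node
    if node and all(re.fullmatch(r"\[\d+\]", k) for k in node):
        # Sort by numeric index, compacting any gaps.
        return [_to_lists(v) for _, v in
                sorted(node.items(), key=lambda kv: int(kv[0][1:-1]))]
    return {k: _to_lists(v) for k, v in node.items()}

row = {"id": 42, "address.city": "London",
       "roles[0]": "admin", "roles[1]": "editor"}
print(unflatten(row))
# {'id': 42, 'address': {'city': 'London'}, 'roles': ['admin', 'editor']}
```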

When flattening is the wrong answer

Some JSON shapes don't flatten cleanly. If you have an array of arrays of objects (a common shape for time-series data with multiple metrics), dot-notation produces unreadable column names like readings[0].metrics[2].value. For these cases, consider:

  • Long format. Pivot the data so each row represents one observation, with columns like reading_id, metric_name, value. This is the standard shape for analytics tools (Pandas, R, Tableau).
  • Multiple CSVs. Export each nested array to its own file with foreign keys — like a relational database export.
  • Stay in JSON. If the consumer can read JSON, give them JSON. CSV is the lowest common denominator, not the goal.
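The long-format pivot is often just a nested loop. A sketch for the time-series shape described above (the field names reading_id, metrics, name, and value are illustrative):

```python
readings = [
    {"reading_id": "r1", "metrics": [{"name": "temp", "value": 21.5},
                                     {"name": "humidity", "value": 0.63}]},
    {"reading_id": "r2", "metrics": [{"name": "temp", "value": 22.1}]},
]

# Long format: one row per (reading, metric) observation.
rows = [
    {"reading_id": r["reading_id"], "metric_name": m["name"], "value": m["value"]}
    for r in readings
    for m in r["metrics"]
]
for row in rows:
    print(row)
# {'reading_id': 'r1', 'metric_name': 'temp', 'value': 21.5}
# {'reading_id': 'r1', 'metric_name': 'humidity', 'value': 0.63}
# {'reading_id': 'r2', 'metric_name': 'temp', 'value': 22.1}
```

Three fixed columns, however many metrics each reading carries, which is exactly what Pandas, R, and Tableau expect.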

A quick pre-flight checklist

Before exporting any JSON to CSV for a real consumer:

  1. Validate the JSON parses (use a JSON formatter — invalid JSON produces silently broken CSVs).
  2. Scan all records for the union of keys. Verify your converter does too.
  3. Decide indexed vs. joined for arrays of primitives.
  4. Confirm UTF-8 BOM if the file is going to Excel on Windows.
  5. Compare the row count of your JSON array to the row count of your CSV (excluding the header). They should match.

If you're tracking changes between two JSON files before flattening — for example, comparing yesterday's API response to today's — a structural JSON diff is a better starting point than diffing the CSVs after conversion. CSV diffs amplify trivial reorderings into row-level "changes" that aren't really changes.

Bottom line

Flattening JSON to CSV is a series of small choices: how to encode nested keys, how to handle arrays, how to preserve schema across exports, how to make Excel happy. The dot-notation rule handles 95% of the cases cleanly. The remaining 5% — mixed-shape arrays, deeply nested structures, leading zeros — are where converters quietly differ. Pick one that does a full pre-scan, emits a UTF-8 BOM, and lets you preview the headers before committing the file. Most "broken" CSV exports trace back to one of those three.
