One bad row can fail an entire import job. Or worse — the import succeeds but your data is silently wrong. Contacts in the wrong fields, data shifted by one column, blank required fields. Here's how to catch structural problems before you upload anything.

Two kinds of import failures

When a CSV import goes wrong, it goes wrong in one of two ways. The first is visible: the import tool throws an error, tells you something's wrong, and nothing gets imported. Frustrating, but recoverable. You know you have a problem and where to start.

The second kind is invisible: the import runs, reports success, and your data appears to be there. But when you look more closely, city names are in the state column, phone numbers are in the ZIP code field, and half your contacts have blank email addresses. That's what happens when an unquoted comma in one row shifts every subsequent column one position to the right. The data is "there" — it's just in the wrong place.

Structural validation catches both kinds before they happen.

The five structural checks that matter

1. Blank column headers

A CSV header row with a missing name looks like this: FirstName,,Email,Phone. That second column has no name. Most import tools either reject the whole file or skip the unnamed column silently — losing all the data in it.

How this happens: someone deletes the content of a header cell in Excel but doesn't delete the column itself, then saves as CSV. The empty cell becomes an empty header field.
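​
To spot this yourself before reaching for an import tool, a few lines of Python are enough. A minimal sketch, assuming a placeholder filename of contacts.csv, that prints the position of any header cell that is blank:

    import csv

    # Read only the header row and report any blank header names
    with open('contacts.csv', newline='') as f:
        headers = next(csv.reader(f))

    blanks = [pos for pos, name in enumerate(headers, start=1) if not name.strip()]
    if blanks:
        print('Blank header(s) at column position(s):', blanks)
    else:
        print('All', len(headers), 'headers have names')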

2. Duplicate column headers

Two columns with the same name: FirstName,LastName,Email,Email. When this imports, one of the Email columns wins and the other's data is lost. You can't predict which one survives — different CRMs handle this differently, and some don't warn you.

How this happens: manually combining exports from two different systems that both have an "Email" field, or a badly written export script that doesn't check for name collisions.
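​
The same header-row read can catch duplicates by counting how often each name appears. Another minimal sketch, again with a placeholder filename:

    import csv
    from collections import Counter

    with open('contacts.csv', newline='') as f:
        headers = next(csv.reader(f))

    # Any header name that appears more than once is a collision
    duplicates = [name for name, count in Counter(headers).items() if count > 1]
    if duplicates:
        print('Duplicate header(s):', duplicates)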

3. Inconsistent column counts

This is the most dangerous structural problem. Some rows have 8 fields; others have 9. The cause is almost always an unquoted comma in a data cell. The address 123 Main St, Suite 400 without surrounding quotes becomes two separate fields: 123 Main St and Suite 400. Every column from that point in that row is now shifted one position to the right.

The result: the value that should go into "City" goes into "State". The state goes into "ZIP". The ZIP goes into "Country". And so on. The import either fails on that row or silently places data in the wrong fields. For a file with thousands of rows, this is a data quality catastrophe.
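​
To make the shift concrete, here is a small, self-contained Python illustration using made-up contact data (nothing here comes from a real export). The first data row quotes the address and parses into the expected 8 fields; the second leaves the comma unquoted and parses into 9, pushing every later value one column to the right:

    import csv
    import io

    sample = (
        'Name,Address,City,State,ZIP,Country,Phone,Email\n'
        'Ada Example,"123 Main St, Suite 400",Springfield,IL,62701,US,555-0100,ada@example.com\n'
        'Bob Example,123 Main St, Suite 400,Springfield,IL,62701,US,555-0101,bob@example.com\n'
    )

    for row in csv.reader(io.StringIO(sample)):
        print(len(row), 'fields:', row)
    # The header and the quoted row have 8 fields; the unquoted row has 9,
    # so "Suite 400" lands in City, "Springfield" in State, and so on.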

What makes it so dangerous is that it doesn't always produce visible errors. The import "succeeds" but thousands of records have misaligned data. Always run a column count check before importing — especially if your data includes address fields, free-text notes, or any field that might contain commas.

4. Completely empty rows

Empty rows often sneak in below the actual data range in Excel. They import as phantom records — contacts or products that exist in the system with no information. Harmless but cluttering, and they inflate your record counts.

5. High-empty-rate columns

A column that's 80% or 90% empty is worth investigating before import. It could mean the export mapped the wrong field, a field wasn't populated in the source system, or a merge/join operation created lots of null values. Better to find out before you import 50,000 records with a blank "Company" field you expected to be populated.
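​
Measuring this yourself means counting blank cells per column and dividing by the row count. A minimal sketch, with a placeholder filename and an arbitrary 80% threshold:

    import csv

    with open('contacts.csv', newline='') as f:
        reader = csv.reader(f)
        headers = next(reader)
        empty_counts = [0] * len(headers)
        total_rows = 0
        for row in reader:
            total_rows += 1
            for col, cell in enumerate(row[:len(headers)]):
                if not cell.strip():
                    empty_counts[col] += 1

    # Flag any column where 80% or more of the cells are blank
    for name, empties in zip(headers, empty_counts):
        rate = empties / total_rows if total_rows else 0
        if rate >= 0.8:
            print(f'{name}: {rate:.0%} empty')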

Running a validation check

CSV Splitter Online's Validate tab checks all five of the above in one automated step.

Step 1

Switch to the Validate tab

Go to csvsplitteronline.com and click "✓ Validate" in the tool mode bar.

Step 2

Drop your file

Drop your CSV, TSV, or XLSX file. Validation runs automatically as soon as the file is parsed — no extra button to click.

Step 3

Review the results

Results show three indicator levels: 🔴 Error (likely to cause import failure), 🟡 Warning (won't necessarily fail but worth reviewing), and 🔵 Info (data quality observations worth knowing about). Each issue shows the count of affected rows and, for column count problems, the first affected row number.

Step 4

Fix, then validate again

After fixing any issues in your file, run validation a second time. It takes seconds and confirms the fix worked correctly.

What to do when validation finds problems

Blank or duplicate headers

Headers are the first row of your CSV file. You can edit them directly in any text editor — open the CSV in VS Code, Notepad++, or even plain Notepad, and fix the first line. Or use CSV Splitter Online's Find & Replace to standardize header names if you have many files with inconsistent naming.
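​
If you prefer to script the cleanup, one approach is to rewrite the file with repaired headers: give blank headers a placeholder name and suffix duplicates so every column keeps its data. A sketch under those assumptions, with placeholder filenames, that reads the whole file into memory (fine for modestly sized files):

    import csv

    with open('contacts.csv', newline='') as f_in:
        rows = list(csv.reader(f_in))

    seen = {}
    fixed_headers = []
    for pos, name in enumerate(rows[0], start=1):
        name = name.strip() or f'Column{pos}'    # name blank headers Column1, Column2, ...
        if name in seen:                         # suffix duplicates: Email, Email_2, ...
            seen[name] += 1
            name = f'{name}_{seen[name]}'
        else:
            seen[name] = 1
        fixed_headers.append(name)

    with open('contacts_fixed_headers.csv', 'w', newline='') as f_out:
        writer = csv.writer(f_out)
        writer.writerow(fixed_headers)
        writer.writerows(rows[1:])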

Inconsistent column counts (the comma problem)

This is the hardest to fix manually because you need to find the specific rows with the wrong count. The validator tells you how many bad rows there are and the first one. For small files, open in a text editor and search for lines that look longer or shorter than usual. For large files, Python is the most efficient approach:

    import csv

    with open('data.csv', newline='') as f:
        reader = csv.reader(f)
        headers = next(reader)
        expected_cols = len(headers)
        # Data rows start at 2 because row 1 is the header
        for i, row in enumerate(reader, start=2):
            if len(row) != expected_cols:
                print(f'Row {i}: {len(row)} columns (expected {expected_cols})')
                print(f'  Content: {row}')

Once you identify the offending rows, the fix is usually to wrap the problematic field in double quotes. The address 123 Main St, Suite 400 should be "123 Main St, Suite 400" in the CSV.
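​
If you end up rebuilding rows programmatically instead of editing by hand, Python's csv.writer handles the quoting for you: with its default minimal quoting, any field containing a comma is wrapped in double quotes on output. A small illustration:

    import csv
    import io

    buffer = io.StringIO()
    writer = csv.writer(buffer)   # default QUOTE_MINIMAL quotes only fields that need it
    writer.writerow(['Ada Example', '123 Main St, Suite 400', 'Springfield'])
    print(buffer.getvalue(), end='')
    # Ada Example,"123 Main St, Suite 400",Springfield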

Empty rows

The Split tab has a "Skip empty rows" option checked by default. If you're splitting before importing, empty rows are automatically removed. If you need to remove them without splitting, use the Row Filter: Column → any required field (like Email) → Condition: is empty → Action: Remove → download.
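​
If you'd rather remove them in code, a completely empty row is one where every cell is blank, which makes the filter short. A sketch with placeholder filenames:

    import csv

    with open('contacts.csv', newline='') as f_in, \
         open('contacts_clean.csv', 'w', newline='') as f_out:
        writer = csv.writer(f_out)
        # Keep a row only if at least one cell contains non-whitespace content
        for row in csv.reader(f_in):
            if any(cell.strip() for cell in row):
                writer.writerow(row)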

High-empty columns

First, decide if the column should be in the import at all. If the data genuinely isn't populated, the column isn't adding value. Use the Column Filter in CSV Splitter Online's Split tab to uncheck it — it won't appear in the output files. If the data should be populated but isn't, that's a data sourcing issue to investigate before importing.

Validation from the command line

For checking column count consistency on a large file quickly:

    # Show all unique column counts and how many rows have each count
    # (if the output is a single line, all rows are consistent)
    awk -F',' '{print NF}' file.csv | sort | uniq -c

    # Find blank headers (checks if first row has empty fields)
    head -1 file.csv | awk -F',' '{for(i=1;i<=NF;i++) if($i=="") print "Blank header at position", i}'

Note: the awk comma-split doesn't handle quoted fields correctly — a field like "Smith, John" counts as two fields. For CSV-aware column counting, the Python approach is more reliable.
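​
A CSV-aware equivalent of the awk count, using the same standard-library reader as the script above:

    import csv
    from collections import Counter

    with open('file.csv', newline='') as f:
        counts = Counter(len(row) for row in csv.reader(f))

    # One entry means every row has the same number of fields,
    # e.g. Counter({8: 5001}); anything else needs investigating.
    print(counts)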

Building validation into your import workflow

If you import CSVs regularly, build validation into your standard process rather than doing it ad hoc. The sequence:

  1. Export the source data
  2. Validate — run structural checks immediately after export
  3. Clean — fix any issues found (blank headers, inconsistent columns, empty rows)
  4. Validate again — confirm fixes worked
  5. Import

The validate-fix-validate loop adds maybe five minutes to an import workflow. The alternative — discovering a data quality problem after import, in a live CRM — can take hours or days to clean up. The math is obvious.

For recurring imports from the same source system, keep a validation checklist specific to that source. If HubSpot exports always have the "HubSpot Score" column that you don't need, add "Remove HubSpot Score column" to your checklist. If your payroll system always exports dates in MM/DD/YYYY but your HR system wants YYYY-MM-DD, add "Reformat dates" to the list. Systematize the cleanup so it's not a discovery process every time.

Validate your CSV before it causes problems

Catch blank headers, duplicate headers, inconsistent column counts, empty rows, and high-empty columns in one automated check.

Try CSV Splitter Online free →

Frequently asked questions

My CRM's import worked but the data looks wrong — what happened?

This is almost always the inconsistent column count problem — an unquoted comma shifted data into the wrong columns. Run CSV Splitter Online's Validate tab on your original CSV file. If it reports rows with inconsistent column counts, that's the cause. Find the affected rows (the validator shows the first bad row number), fix the unquoted commas, and re-import.

Does CSV Splitter Online validate XLSX files?

Yes — drop an XLSX file on the Validate tab and it reads the first sheet and runs the same checks. The column count check is particularly useful for XLSX because Excel sometimes leaves "invisible" content (spaces, empty formatted cells) far to the right of your data range, which shows up as extra columns in the parsed output.

How do I find exactly which rows have inconsistent column counts?

CSV Splitter Online's validator tells you the count of bad rows and the first bad row number. For a complete list, use the Python script in the "What to do when validation finds problems" section above — it prints every bad row and its content. For a quick look in a text editor, the bad rows are usually visually longer or shorter than the surrounding rows once you know to look for them.

Should I validate before or after removing unnecessary columns?

Validate first, then clean. Structural problems like inconsistent column counts need to be diagnosed with the original column structure intact — removing columns first can obscure where the issue is. Once structural validation passes, then remove columns, then validate one more time to confirm the cleaned file is still good.

Ready to try it?

Use our free, browser-based CSV tools to apply what you've just learned — no upload, no signup.

Open Split CSV →