Django apps tend to acquire CSV import flows the same way they acquire technical debt: gradually, then all at once. You ship an `<input type="file">`, a view that calls `csv.DictReader`, and a happy-path `bulk_create`. Six months later, support is fielding tickets about Excel files that won't parse, European date formats, columns named "Email Addr" instead of "email", and 100K-row uploads that time out before they reach your database.
This guide shows the alternative: a five-minute integration of Rowslint, a client-side CSV and Excel importer, with your Django backend. You get AI column mapping, multi-format parsing, async validators, and a clean `bulk_create` path on the server.
## What is Rowslint?
Rowslint is an embedded CSV and Excel importer for web apps. From a Django template, an HTMX page, or a DRF-backed React/Vue frontend, you call `launchRowslint()`, your users get a polished import flow, and you receive a JSON array of validated, typed rows on the server.

The importer parses files entirely in the browser. Django doesn't need to handle file uploads, run `csv.DictReader`, or worry about Excel binary formats: only the cleaned rows arrive.
## Step 1: Create a Rowslint account and template
Sign up for the free tier. In the dashboard:
- Create a template (e.g. `customers_v1`).
- Add columns matching your Django model fields.
- Set types and validators per column.
- Save and copy the template key and your organization API key.
## Step 2: Install the package and expose the API key
If you’re using Django + Vite/Webpack:
```bash
npm install @rowslint/importer-js
```
If you’re using Django templates without a frontend bundler, you can include Rowslint via your existing JS pipeline (django-vite, django-webpack-loader) or a small inlined script tag during integration.
Expose the API key to your templates through Django settings:
```python
# settings.py
import os

ROWSLINT_API_KEY = os.environ['ROWSLINT_API_KEY']
```

```python
# context_processors.py
from django.conf import settings

def rowslint(request):
    return {'ROWSLINT_API_KEY': settings.ROWSLINT_API_KEY}
```
Add `'myapp.context_processors.rowslint'` to the `context_processors` list under `TEMPLATES` → `OPTIONS` in `settings.py`.
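For reference, the registration can look like this (a sketch; the surrounding options mirror Django's default template config, and `myapp` is a placeholder for your app name):

```python
# settings.py: register the context processor so every template render
# receives ROWSLINT_API_KEY. 'myapp' is a placeholder for your app name.
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
                'myapp.context_processors.rowslint',  # exposes the API key
            ],
        },
    },
]
```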
## Step 3: Add the importer to a Django template
```django
{# templates/customers/list.html #}
{% extends 'base.html' %}

{% block content %}
<header>
  <h1>Customers</h1>
  <button id="import-btn" class="btn btn-primary">Import customers</button>
</header>

<table>...</table>

{% csrf_token %}

<script type="module">
  import { launchRowslint } from '@rowslint/importer-js';

  const csrf = document.querySelector('[name=csrfmiddlewaretoken]').value;

  document.getElementById('import-btn').addEventListener('click', () => {
    launchRowslint({
      apiKey: '{{ ROWSLINT_API_KEY }}',
      config: { templateKey: 'customers_v1' },
      onImport: async (result) => {
        if (result.status !== 'success') return;
        const res = await fetch('/customers/bulk-import/', {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
            'X-CSRFToken': csrf,
          },
          body: JSON.stringify({ rows: result.data }),
        });
        if (res.ok) window.location.reload();
      },
    });
  });
</script>
{% endblock %}
```
## Step 4: Receive rows in a Django view
```python
# customers/views.py
import json

from django.contrib.auth.decorators import login_required
from django.db import transaction
from django.http import JsonResponse, HttpResponseBadRequest
from django.utils.dateparse import parse_datetime
from django.views.decorators.http import require_POST

from .models import Customer

PLAN_CHOICES = {'free', 'pro', 'enterprise'}


def chunked(seq, size):
    for i in range(0, len(seq), size):
        yield seq[i:i + size]


@login_required
@require_POST
def bulk_import(request):
    try:
        payload = json.loads(request.body)
        rows = payload['rows']
    except (json.JSONDecodeError, KeyError):
        return HttpResponseBadRequest('Invalid payload')

    if not isinstance(rows, list) or len(rows) > 50_000:
        return HttpResponseBadRequest('Too many rows')

    # Normalize and validate every row server-side.
    customers = []
    for row in rows:
        if not row.get('email') or row.get('plan') not in PLAN_CHOICES:
            continue  # skip rows that fail server-side validation
        customers.append(Customer(
            email=row['email'],
            name=row.get('name', ''),
            plan=row['plan'],
            created_at=parse_datetime(row.get('created_at', '')),
        ))

    inserted = 0
    with transaction.atomic():
        for batch in chunked(customers, 500):
            Customer.objects.bulk_create(batch, ignore_conflicts=True)
            inserted += len(batch)

    return JsonResponse({'inserted': inserted})
```
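The `chunked()` helper is plain Python, so its batching behavior is easy to check in isolation:

```python
def chunked(seq, size):
    # Yield consecutive slices of at most `size` items.
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

# 1,200 rows with a batch size of 500 produce batches of 500, 500, and 200.
batch_sizes = [len(batch) for batch in chunked(list(range(1200)), 500)]
```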
Add the URL:
```python
# customers/urls.py
from django.urls import path

from . import views

urlpatterns = [
    path('bulk-import/', views.bulk_import, name='customer_bulk_import'),
]
```
Three things this view gets right:

- `@login_required` plus Django's CSRF middleware secure the endpoint by default.
- It re-validates `plan` and `email` server-side, even though Rowslint already validated client-side.
- `bulk_create` with `ignore_conflicts=True` is idempotent: re-running the import on duplicate emails won't crash.
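`ignore_conflicts=True` handles duplicates that already exist in the database; duplicates *within* the uploaded file can be dropped before insert with a small helper (a sketch in plain Python, not part of the Rowslint API):

```python
def dedupe_by_email(rows):
    """Keep the first occurrence of each email; later duplicates are dropped."""
    seen = set()
    out = []
    for row in rows:
        email = row.get('email', '').lower()
        if email and email not in seen:
            seen.add(email)
            out.append(row)
    return out

rows = [
    {'email': 'a@x.com', 'name': 'A'},
    {'email': 'A@x.com', 'name': 'A dup'},  # same email, different case
    {'email': 'b@x.com', 'name': 'B'},
]
```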
## Step 5: DRF variant for API-first Django
If your frontend is React/Vue and your Django is DRF-only, use a serializer:
```python
# customers/serializers.py
from rest_framework import serializers

from .models import Customer


class CustomerImportSerializer(serializers.ModelSerializer):
    class Meta:
        model = Customer
        fields = ['email', 'name', 'plan', 'created_at']


class CustomerImportListSerializer(serializers.Serializer):
    rows = CustomerImportSerializer(many=True)

    def validate_rows(self, rows):
        if len(rows) > 50_000:
            raise serializers.ValidationError('Too many rows')
        return rows
```
```python
# customers/views.py
from django.db import transaction
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework.views import APIView

from .models import Customer
from .serializers import CustomerImportListSerializer


class CustomerBulkImportView(APIView):
    permission_classes = [IsAuthenticated]

    def post(self, request):
        serializer = CustomerImportListSerializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        rows = serializer.validated_data['rows']

        objs = [Customer(**row) for row in rows]
        with transaction.atomic():
            for batch in chunked(objs, 500):
                Customer.objects.bulk_create(batch, ignore_conflicts=True)

        return Response({'inserted': len(objs)}, status=201)
```
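The request body this endpoint expects looks like the sketch below. The row-count guard from `validate_rows` is shown as plain Python so it can be checked without DRF installed (the sample row data is hypothetical):

```python
# Shape of the JSON body posted by the frontend (hypothetical sample row).
payload = {
    'rows': [
        {'email': 'ada@example.com', 'name': 'Ada', 'plan': 'pro',
         'created_at': '2024-01-15T09:30:00Z'},
    ],
}

def validate_rows(rows, limit=50_000):
    # Mirrors the serializer's guard: reject oversized imports up front.
    if len(rows) > limit:
        raise ValueError('Too many rows')
    return rows
```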
## Step 6: Add async validation against your Django database
Configure async validators in your Rowslint template (dashboard) pointing at a Django view:
```python
# customers/views.py (same file as above)
@login_required
@require_POST
def validate_email(request):
    try:
        payload = json.loads(request.body)
    except json.JSONDecodeError:
        return HttpResponseBadRequest('Invalid payload')

    email = payload.get('value', '')
    exists = Customer.objects.filter(email=email).exists()
    return JsonResponse({
        'valid': not exists,
        'message': 'A customer with this email already exists' if exists else None,
    })
```
Users see inline validation errors during column mapping and never have to fix the file and re-upload it.
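The contract between the async validator and the Django view is just JSON in, JSON out. A pure-Python sketch of that exchange (the `existing_emails` set stands in for the database query):

```python
import json

def validate_email_payload(body, existing_emails):
    # Mirrors the view: valid iff the submitted email is not already taken.
    email = json.loads(body).get('value', '')
    exists = email in existing_emails
    return {
        'valid': not exists,
        'message': 'A customer with this email already exists' if exists else None,
    }
```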
## Step 7: HTMX integration (optional)
For HTMX-backed Django apps, dispatch an HTMX-friendly event after import:
```django
<button
  id="import-btn"
  hx-post="/customers/bulk-import/"
  hx-trigger="rowslint:imported from:body"
  hx-target="#customer-list"
  hx-swap="outerHTML"
  hx-vals='js:{rows: window._rowslintRows}'
>
  Import customers
</button>

<script type="module">
  import { launchRowslint } from '@rowslint/importer-js';

  document.getElementById('import-btn').addEventListener('click', (e) => {
    e.preventDefault();
    launchRowslint({
      apiKey: '{{ ROWSLINT_API_KEY }}',
      config: { templateKey: 'customers_v1' },
      onImport: (result) => {
        if (result.status === 'success') {
          window._rowslintRows = JSON.stringify(result.data);
          document.body.dispatchEvent(new CustomEvent('rowslint:imported'));
        }
      },
    });
  });
</script>
```

Note that htmx sends `hx-vals` values form-encoded, not as a JSON body, so a view wired to this button should read the rows with `json.loads(request.POST['rows'])` rather than from `request.body`.
## Why Django teams pick Rowslint
- **No file upload boilerplate.** No `request.FILES`, no `MultiPartParser`, no temp file cleanup. Just a JSON POST of clean rows.
- **No `pandas`/`openpyxl` dependency.** Excel parsing on the server pulls in 30+ MB of dependencies and is a known security surface. Rowslint parses in the browser sandbox.
- **Server stays fast.** A 100K-row CSV doesn't pin a Gunicorn worker for 30 seconds. The user's MacBook does the parsing.
## Compared to Django CSV import packages
| Feature | Rowslint | django-import-export | django-csv-import | pandas in a view |
|---|---|---|---|---|
| Drop-in customer-facing UI | ✓ | admin only | basic | ✗ |
| AI column matching | ✓ | ✗ | ✗ | ✗ |
| Excel (XLSX) support | ✓ | ✓ | ✗ | ✓ |
| Async validation during mapping | ✓ | ✗ | ✗ | ✗ |
| Server CPU per import | low | high | medium | high |
| Setup time | < 5 min | ~1 day | ~2 days | ~4 hours |
django-import-export is the right call for adding CSV/Excel support to the Django admin itself. For customer-facing import, Rowslint is purpose-built.
## Production-ready checklist
- `@login_required` and CSRF protection on the bulk endpoint
- DRF permissions if using API-first
- `bulk_create(batch_size=500, ignore_conflicts=True)` for idempotent inserts
- `transaction.atomic` wrapping the import
- Row-count limit (e.g. 50K) to prevent memory blowups
- Async validators wired for uniqueness checks
- Sentry/logging for failed imports
## Conclusion
Django's CSV import story doesn't have to mean `csv.DictReader`, `request.FILES`, and a queue worker. With Rowslint, your Django backend receives a JSON array of clean, validated rows, and you get a polished, AI-powered import UI in your frontend with five minutes of integration.
Start with the free tier and ship CSV import in your Django app today. See the JavaScript SDK reference for the full API.
## Frequently asked questions
- What is the best way to import CSV files in Django?
- For customer-facing import flows, the most maintainable approach is a client-side importer like Rowslint that hands cleaned, validated rows to a Django view. This avoids file upload endpoints, server-side CSV parsing, and the overhead of `pandas` or `csv` for what is fundamentally a UI problem. For internal management commands, Django's built-in `csv` module and `bulk_create` are still excellent choices.
- How do I integrate Rowslint with Django templates?
- Add `@rowslint/importer-js` via npm or include it from a CDN in your base template. Call `launchRowslint()` from a button click handler, and POST the validated rows to a Django view that handles the CSRF token. The setup takes five minutes and works with Django 4.2, 5.0, and 5.1.
- Does Rowslint work with Django REST Framework?
- Yes. Send the imported rows to a DRF `APIView` or `ViewSet` action that uses a serializer's `many=True` validation, then call `bulk_create()` to insert them. The article shows the full pattern with `ListSerializer.validate()` and chunked inserts.
- How do I bulk insert imported rows in Django?
- Use `Model.objects.bulk_create(objs, batch_size=500, ignore_conflicts=True)` for raw inserts. For databases that support `RETURNING` (Postgres), `bulk_create` returns the inserted instances with primary keys. For complex post-save logic, batch the rows (e.g. with the `chunked()` helper above) and use a regular `.save()` loop instead.
- Can I validate imported rows against my Django models?
- Yes. Configure async validators in your Rowslint template that POST to a Django view returning `{valid: bool, message: str}`. Users see inline errors during column mapping for uniqueness checks (`Model.objects.filter(...).exists()`), foreign keys, and any custom Django validation rule.
- Should I use Rowslint or pandas for CSV import in Django?
- Different jobs. Rowslint is for customer-facing UIs where end users upload spreadsheets. `pandas.read_csv()` is for data engineers and analysts processing files in notebooks or batch jobs. They solve different problems and many Django apps use both.