
SQLite in Depth: Concurrency & Locked Error

Introduction to SQLite

SQLite is a powerful, lightweight, and serverless relational database management system (RDBMS) widely used for small to mid-scale projects. Its main strengths lie in its ease of use, minimal configuration, and zero maintenance in most cases. It’s a great choice for applications like startups, portfolio projects, local tools, or simple apps you’re developing.

While considered “small,” SQLite is far from limited. It supports unlimited simultaneous readers, making it highly efficient for read-heavy workloads such as websites with thousands of daily visitors.

SQLite Performance: Reads vs Writes

However, only one writer can access the database at a time, meaning all other write operations must queue up. While this may seem like a limitation, write locks are typically held for just a few milliseconds, and SQLite can handle up to 50,000 INSERTs per second in theory, limited mainly by disk I/O. For most low- to medium-traffic sites (e.g., under 100,000 hits/day), this is more than sufficient.
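The exact numbers depend on your disk and journal settings, but the key trick behind high insert rates is batching writes into a single transaction so SQLite syncs once, not once per row. A rough micro-benchmark sketch (in-memory database and a made-up table, so the figure you see excludes real disk I/O):

```python
import sqlite3
import time

# In-memory database: isolates the batching effect from disk speed
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hits (id INTEGER PRIMARY KEY, path TEXT)")

n = 10_000
start = time.perf_counter()
with conn:  # one transaction for the whole batch: one sync, not 10,000
    conn.executemany("INSERT INTO hits (path) VALUES (?)",
                     (("/page",) for _ in range(n)))
elapsed = time.perf_counter() - start

rate = n / elapsed
print(f"{n} inserts in {elapsed:.3f}s (~{rate:,.0f}/s)")
```

Run the same loop with a commit per row and the rate collapses, which is why proper transaction management (covered below) matters so much.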

When to Consider Other Databases

Once you hit 100k requests/day or more, it’s time to seriously consider upgrading to a client-server database system like PostgreSQL, MySQL, or even NoSQL solutions like MongoDB. Let’s walk through SQLite’s strengths, how to mitigate its file-level locking issues, and optimise it for higher traffic.

SQLite is ideal for applications where operations are mostly synchronous, and concurrency demands are manageable. It excels in environments where you don’t expect heavy concurrent writes, such as to-do lists, note-taking apps, or a budget tracking app. With proper optimisation, even moderate concurrency can be handled effectively.

Understanding the “Database Locked” Error

However, when multiple processes attempt to write to the database simultaneously, SQLite’s default locking mechanisms can cause a “database is locked” error. This typically happens when one write operation is still in progress, and another process tries to access the database for writing.
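A minimal reproduction makes the failure mode concrete. In this sketch (path and table name are illustrative), `timeout=0` tells SQLite to fail immediately instead of waiting for the lock to clear:

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "locked_demo.db")

# isolation_level=None puts the connection in autocommit mode,
# so we control the transaction boundaries ourselves.
writer = sqlite3.connect(db_path, timeout=0, isolation_level=None)
writer.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY)")

# Connection 1 starts a write transaction and holds the write lock
# by not committing yet.
writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO jobs DEFAULT VALUES")

# Connection 2 now tries to write while that lock is still held.
other = sqlite3.connect(db_path, timeout=0, isolation_level=None)
msg = ""
try:
    other.execute("INSERT INTO jobs DEFAULT VALUES")
except sqlite3.OperationalError as e:
    msg = str(e)

print("second writer got:", msg)  # database is locked
writer.execute("ROLLBACK")
writer.close()
other.close()
```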

In this post, we’ll explore how it handles concurrency, ways to optimise its performance, and how to resolve the “database locked” error that can occur when handling simultaneous write requests.

Optimising SQLite For Concurrency

SQLite is available in many programming languages and environments, making it highly versatile. Since it’s a lightweight, serverless database, it can be embedded directly into various applications across languages like Python, Node.js, Java, C/C++, and more.

However, for the examples we’ll discuss today, I’ll be using Python, as it’s my primary language. I work with Python every day, constantly learning and solving new problems, so I’ll be demonstrating how SQLite integrates with Python to tackle real-world issues.

While SQLite isn’t designed to handle high concurrency out of the box, as we discussed, there are a few strategies you can implement to reduce locking issues:

1. Proper Transaction Management

Improper transaction handling is one of the most common causes of database locks. If a transaction isn’t committed or rolled back correctly, the database can stay locked for longer than necessary, causing a bottleneck.

BEGIN TRANSACTION;
-- Insert data into the table
INSERT INTO workers (name, position) VALUES ('John Smith', 'Software Engineer');
-- Commit the transaction
COMMIT;
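In Python, the same commit-or-roll-back discipline can be sketched with the connection’s context manager, which commits on success and rolls back automatically if the block raises, so the write lock is never left hanging. (The `workers` table matches the SQL above; the `CREATE TABLE` line is added here only so the snippet runs standalone.)

```python
import sqlite3

conn = sqlite3.connect("database.db")
conn.execute("CREATE TABLE IF NOT EXISTS workers (name TEXT, position TEXT)")
conn.commit()

try:
    # Using the connection as a context manager commits on success
    # and rolls back if the block raises an exception.
    with conn:
        conn.execute("INSERT INTO workers (name, position) VALUES (?, ?)",
                     ("John Smith", "Software Engineer"))
finally:
    conn.close()
```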

2. Use Database Connections Properly

Opening a database connection and not closing it properly is another common cause of database locks. Always close your database connection after your operations are done.

import sqlite3
from contextlib import closing

db_path = "database.db"

# Always use parameterised queries to mitigate SQL injection

with closing(sqlite3.connect(db_path)) as conn:
    with closing(conn.cursor()) as cursor:
        cursor.execute("INSERT INTO workers (name, position) VALUES (?, ?)", ('John Smith', 'Systems Engineer'))
    conn.commit()

3. Configure Timeout for Locks

In high-concurrency environments, setting a timeout allows SQLite to wait for a lock to be released before throwing a “locked” error. This can help manage occasional concurrency issues.

import sqlite3

# Set timeout to 10 seconds so the app waits instead of crashing immediately
conn = sqlite3.connect('database.db', timeout=10.0)  
cursor = conn.cursor()

try:
    cursor.execute("INSERT INTO workers (name, position) VALUES (?, ?)", ('Sam Smith', 'Data Analyst'))
    conn.commit() 
finally:
    cursor.close()
    conn.close()


4. Connection Pooling (For Advanced Use Cases)

Opening a connection costs real time. A new SQLite connection usually takes between 5 ms and 20 ms to open. While that seems small, if your blog gets a surge of 100 visitors at once, your server spends up to 2,000 ms (2 full seconds) just opening and closing files instead of processing data.

With Connection Pooling, grabbing a ready-to-go connection takes less than 0.1ms. That makes the process roughly 100x to 200x faster. For an app handling 50,000 requests a day, pooling can save your server over 15 minutes of pure CPU “wasted time” every single day, keeping your site snappy even during high-traffic spikes.

To further optimise, we can enable WAL (Write-Ahead Logging) mode. By default, SQLite blocks readers while a writer is active. WAL mode allows multiple readers and one writer to work at the same time, significantly reducing “Database is Locked” errors.
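Enabling WAL is a one-line pragma, and the setting persists in the database file itself rather than applying only to the current connection (note that WAL may be unavailable on some network filesystems). A minimal sketch with an illustrative path:

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "wal_demo.db")

conn = sqlite3.connect(db_path)

# Switch the database file to Write-Ahead Logging. The pragma returns
# the resulting mode, so we can confirm the switch took effect.
mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
print("journal mode:", mode)  # wal

# busy_timeout (in milliseconds) complements WAL: a blocked writer
# waits this long for the lock instead of failing instantly.
conn.execute("PRAGMA busy_timeout=5000;")
conn.close()
```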

Implementation of Connection Pooling


import sqlite3
from queue import Queue

class DatabasePool:
    def __init__(self, db_file, size=3):
        self.db_file = db_file
        # 1. SETUP: Create a thread-safe "bucket" (Queue) for our connections
        self.pool = Queue(maxsize=size)
        
        # 2. FILL: Create the connections once and store them
        for _ in range(size):
            conn = sqlite3.connect(self.db_file, check_same_thread=False)
            
            # PRO TIP: Enable WAL mode for better concurrency
            # This allows readers to stay active even while a write is happening
            conn.execute('PRAGMA journal_mode=WAL;')
            
            self.pool.put(conn)
        print(f"--- Pool initialized with {size} connections in WAL mode ---")

    def borrow_conn(self):
        # 3. GET: Take a connection out. If empty, this waits (blocks) for the next available one.
        return self.pool.get()

    def return_conn(self, conn):
        # 4. PUT BACK: Return the connection to the bucket for the next request.
        self.pool.put(conn)

# --- THE LOGIC IN ACTION ---

# Initialize our pool
my_pool = DatabasePool('blog_demo.db', size=3)

def run_query(query, params=()):
    # STEP A: Borrow a "ready-to-go" connection from the pool
    conn = my_pool.borrow_conn()
    
    try:
        # STEP B: Perform the database work
        cursor = conn.cursor()
        cursor.execute(query, params)
        conn.commit()
        return cursor.fetchall()
    
    finally:
        # STEP C: Always return the connection, even if the query fails!
        my_pool.return_conn(conn)
        print("Connection returned to pool.")

# Example usage (as always, use parameterised queries to mitigate SQL injection):

run_query("CREATE TABLE IF NOT EXISTS users (id INTEGER, name TEXT)")
run_query("INSERT INTO users VALUES (?, ?)", (1, 'Alice'))
users = run_query("SELECT * FROM users")

print(f"Database Result: {users}")

Summary

This example demonstrates how to set up a basic connection pool using Python’s Queue. The DatabasePool class maintains a fixed number of SQLite connections, which can be borrowed and returned by your application logic, reducing the overhead of establishing new connections each time.

While SQLite is an excellent choice for small to medium-scale applications, understanding its concurrency limitations and optimising its performance can ensure smooth operation as your app scales. For high-traffic environments, remember that thoughtful design, like connection pooling and proper transaction management, can help you make the most of SQLite’s power. When your traffic really spikes, though, it’s time to consider upgrading to a more robust database solution like PostgreSQL or MongoDB.

Want to Dive Deeper Into Development?

If you want to explore more about software development, databases, and building real-world projects, check out our other posts. While we often use Python for examples, the concepts apply across different languages and tools; visit our development category.
