You can optimize Odoo module performance by following step-by-step profiling and query tuning. First, you will learn to use the built-in Developer Tools and Profiler in your browser. Next, you will apply Python profiling with cProfile inside the Odoo Shell. Then, you will identify slow database queries using PostgreSQL extensions such as pg_stat_statements and optimize them by adding proper indexes. Finally, you will reduce ORM overhead and avoid N+1 patterns to boost module speed. By the end of this tutorial, you will master practical strategies to improve your custom module efficiency and deliver snappier user experiences.
Profile Frontend and Browser Bottlenecks
Use Odoo Developer Tools Profiler
First, activate Developer Mode in Odoo by adding ?debug=1 to your URL or toggling it from the user menu. Then, open your browser’s Developer Tools (often via F12 or right-click “Inspect”). Next, switch to the Network tab and enable Performance or Profiler. In recent versions, you will also find a built-in Profiler under “Developer Tools → Profiler” that records JavaScript and network calls.
For example, when you load a heavy Kanban view or Dashboard, you can click Start Profiling, perform the action, and then click Stop Profiling. The Profiler will report CPU usage, rendering times, and network request durations. You will also see WebSocket traffic and XHR calls to routes like /web/dataset/search_read or /web/dataset/call_kw. These calls often map to slow SQL queries on the server.
# Steps in browser:
1. Open Odoo with debug=1
2. F12 → Developer Tools
3. Click Profiler → Start profiling
4. Reproduce user action (e.g., open Sales Orders)
5. Click Profiler → Stop profiling
6. Inspect timeline and network waterfall
Moreover, you can write a small JavaScript snippet in the Console to capture network logs programmatically. For instance:
let requests = [];
let origOpen = XMLHttpRequest.prototype.open;
XMLHttpRequest.prototype.open = function (method, url) {
  const start = performance.now();
  this.addEventListener('load', () => {
    // Duration is measured client-side; Odoo sends no timing header by default
    requests.push({ method, url, duration: performance.now() - start });
  });
  return origOpen.apply(this, arguments);
};
This script records each XHR call’s method, URL, and client-side duration in milliseconds. Then, you can inspect requests in the Console (for example with console.table(requests)) to see which endpoints cause delays.
Capture WebSocket and Log Outputs
After identifying frontend delays, you will inspect WebSocket traffic that Odoo uses to push real-time updates. In the Network tab, filter by “WS” or “websocket”. Then, select each frame to view JSON payloads. You may notice large payloads or repeated calls to /longpolling/poll. These issues often indicate server-side bottlenecks or missing caching.
Additionally, you can write a Python script to tail Odoo logs and extract slow endpoints:
import re
import time

LOG_PATH = '/var/log/odoo/odoo-server.log'
# Adjust this pattern to match your actual log format
pattern = re.compile(r'\[([0-9.]+)s\] .* route: (/web/dataset/.*)')

with open(LOG_PATH) as f:
    f.seek(0, 2)  # jump to the end of the file, like tail -f
    while True:
        line = f.readline()
        if not line:
            time.sleep(0.1)
            continue
        match = pattern.search(line)
        if match and float(match.group(1)) > 0.5:
            print(f"Slow call: {match.group(2)} took {match.group(1)}s")
This script watches your Odoo log file for calls that exceed 0.5 seconds and prints each slow route. Note that the regular expression assumes your log lines include a timing prefix and the route; adjust both the pattern and the threshold to match your setup.
Profile Python Code with Odoo Shell
Record a Profile with cProfile
Then, you will profile Python code inside the Odoo Shell. First, start the shell:
odoo shell -d my_database
Next, set up cProfile, which ships with the Python standard library, so no extra installation is required. You create a Profile object, enable it around the code path you want to measure, and disable it afterwards.
# in Odoo Shell
import cProfile

pr = cProfile.Profile()
pr.enable()  # start collecting timing data for everything run from here on
After enabling, execute the code path you want to measure, for example:
# Example: call a custom method that imports data
self.env['my.module'].execute_mass_import(records)
Finally, disable profiling and inspect the results:
pr.disable()
pr.print_stats('cumulative')  # sort by cumulative time to surface slow call chains
Analyze Profiling Output
Moreover, you can dump profiling data to a file and analyze it with snakeviz:
pr.dump_stats('/tmp/profile.stats')
Then, exit the shell, install SnakeViz if needed (pip install snakeviz), and run:
snakeviz /tmp/profile.stats
SnakeViz will open an interactive visualization (an icicle or sunburst chart) in your browser, which helps you pinpoint slow Python functions. You can see which methods consume the most time and optimize accordingly.
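If you prefer to stay in the terminal, the standard-library pstats module can summarize the same dump; a minimal sketch:

import pstats

# Load the dump produced by pr.dump_stats() and show the 20 slowest entries
stats = pstats.Stats('/tmp/profile.stats')
stats.sort_stats('cumulative').print_stats(20)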
Identify Slow Database Queries
Enable pg_stat_statements Extension
Next, you will inspect slow SQL queries. First, you need to install and enable the pg_stat_statements extension in PostgreSQL. Connect to your database:
sudo -u postgres psql my_database
Then, run:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
This extension tracks execution statistics for all SQL statements. Note that pg_stat_statements must also be listed in shared_preload_libraries in postgresql.conf, followed by a PostgreSQL restart, before it starts collecting data. You can view details on the official PostgreSQL docs: https://www.postgresql.org/docs/current/pgstatstatements.html.
Query pg_stat_statements for Slow Queries
After enabling, you will query the extension to list the top ten slowest statements:
SELECT
    calls,
    total_exec_time,   -- total_time on PostgreSQL 12 and earlier
    mean_exec_time,    -- mean_time on PostgreSQL 12 and earlier
    query
FROM pg_stat_statements
WHERE dbid = (SELECT oid FROM pg_database WHERE datname = 'my_database')
ORDER BY total_exec_time DESC
LIMIT 10;
This query returns the most time-consuming SQL calls. You can then inspect each query field to see which table joins or filters cause delays. Often, you will spot missing indexes or queries that scan large tables.
Optimize SQL and Add Indexes
Analyze Query Plans with EXPLAIN ANALYZE
Then, you will use EXPLAIN ANALYZE to get detailed plans:
EXPLAIN ANALYZE
SELECT partner_id, SUM(amount_total)
FROM account_invoice
WHERE date_invoice >= '2023-01-01'
GROUP BY partner_id;
The output will show time spent on each operation. If you see “Seq Scan” on a large table, that indicates a full table scan.
Add Appropriate Indexes
Next, you will add indexes on frequently filtered fields. For example, if you filter invoices by date_invoice, create a B-tree index:
CREATE INDEX idx_account_invoice_date
ON account_invoice (date_invoice);
If you filter by a combination of fields, use a composite index:
CREATE INDEX idx_partner_date
ON account_invoice (partner_id, date_invoice);
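Inside a custom module, you can also let Odoo create single-column indexes for you by declaring index=True on the field, which Odoo applies at module install or upgrade; composite and partial indexes still require raw SQL as shown here. A minimal sketch with illustrative model and field names:

from odoo import fields, models

class MyInvoice(models.Model):
    _name = 'my.invoice'

    # index=True makes Odoo create a B-tree index on this column automatically
    date_invoice = fields.Date(index=True)
    partner_id = fields.Many2one('res.partner', index=True)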
Use Partial Indexes for Selective Queries
Moreover, you can improve performance further with partial indexes. If you frequently query only posted invoices, add:
CREATE INDEX idx_invoice_posted
ON account_invoice (partner_id, date_invoice)
WHERE state = 'posted';
This index covers fewer rows and speeds up queries that include state = 'posted'.
Reduce ORM Overhead and Avoid N+1 Queries
Prefetch Related Records
First, you will reduce N+1 query patterns by prefetching related records. Instead of:
invoices = self.env['account.invoice'].search(domain)
for inv in invoices:
    print(inv.partner_id.name)  # triggers one query per invoice
You can use read or mapped:
invoices = self.env['account.invoice'].search(domain)
data = invoices.read(['id', 'partner_id'])
for rec in data:
    print(rec['partner_id'][1])  # no extra queries
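The mapped helper mentioned above gives the same batching while keeping recordsets instead of raw dicts; a minimal sketch:

# mapped() collects the related partners in one batch, then reads names in one pass
partners = invoices.mapped('partner_id')
print(partners.mapped('name'))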
Alternatively, use read_group
to aggregate without loading full records:
result = self.env['account.invoice'].read_group(
    [('state', '=', 'posted')], ['partner_id'], ['partner_id']
)
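Each entry in the result is a dict holding the grouping key and an aggregate count, roughly of the form {'partner_id': (7, 'Azure Interior'), 'partner_id_count': 12} (illustrative values), so no invoice records are materialized in memory.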
Use Domain Filters Early
Then, push filters into the initial search to avoid loading unnecessary records:
# Bad: loads all records, then filters in Python
records = self.env['my.model'].search([])
filtered = [r for r in records if r.field_x == value]
# Good: filter at database level
records = self.env['my.model'].search([('field_x', '=', value)])
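Likewise, when you only need a count or the first match, let PostgreSQL do the work instead of materializing records; a short sketch reusing the hypothetical model above:

# COUNT(*) at the SQL level; no records are loaded into memory
count = self.env['my.model'].search_count([('field_x', '=', value)])

# LIMIT 1 at the SQL level; only one record is fetched
first = self.env['my.model'].search([('field_x', '=', value)], limit=1)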
Leverage Caching and Memory Optimizations
Enable Odoo Cache Decorators
Next, you will decorate expensive methods with @ormcache from odoo.tools to memoize results:
from odoo import models
from odoo.tools import ormcache

class MyModel(models.Model):
    _name = 'my.model'

    @ormcache('field_id')
    def get_expensive_data(self, field_id):
        # expensive computation, cached per field_id
        return compute(field_id)
This step reduces repeated database or API calls for the same arguments within a single server process.
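Because ormcache entries persist until explicitly invalidated, pair cached getters with invalidation after writes. A minimal sketch, assuming a hypothetical _do_write helper; most versions expose clear_caches() on the model, while newer releases move this to the registry:

def update_data(self, field_id, value):
    self._do_write(field_id, value)  # hypothetical write that changes the cached data
    self.clear_caches()  # drop memoized ormcache results so callers recompute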
Tune Worker Memory and Garbage Collection
Moreover, you can bound worker memory so that Odoo recycles workers before they bloat. In your odoo.conf, set the built-in limits:

[options]
limit_memory_soft = 2147483648   # bytes; worker is recycled after finishing its current request
limit_memory_hard = 2684354560   # bytes; worker is killed immediately if exceeded
limit_request = 8192             # requests a worker serves before being recycled

These limits ensure that long-running workers release memory regularly instead of accumulating it indefinitely.
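If you do suspect garbage-collection pauses inside a long-running job, Python’s standard gc module lets you tune the collector directly in your job code; a minimal sketch with illustrative thresholds (the defaults are 700, 10, 10):

import gc

# Raise the generation-0 threshold so allocation-heavy batch loops collect less often
gc.set_threshold(50000, 20, 20)

# ... run the batch work ...

gc.collect()  # one explicit full collection at a safe point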
Use Asynchronous Job Queue Instead of Cron
Install OCA Queue Job Module
First, you will install the queue_job module from the OCA queue repository (https://github.com/OCA/queue). This module provides a job queue that runs tasks asynchronously, offloading heavy operations from the main cron scheduler.
Replace Heavy Cron with Queue Jobs
Then, you will convert your cron definition:
<record id="ir_cron_my_task" model="ir.cron">
    <field name="name">Daily Import</field>
    <field name="model_id" ref="model_my_import"/>
    <field name="state">code</field>
    <field name="code">model.execute_mass_import()</field>
    <field name="interval_number">1</field>
    <field name="interval_type">days</field>
</record>
to a method that enqueues the work with with_delay() (the @job decorator was only required by older queue_job versions):
from odoo import models

class MyImport(models.TransientModel):
    _name = 'my.import'

    def enqueue_mass_import(self, record_ids):
        # with_delay() enqueues the call as a background job instead of running it inline;
        # identity_key deduplicates jobs queued for the same set of records
        identity = 'mass_import_%s' % ','.join(map(str, sorted(record_ids)))
        return self.with_delay(identity_key=identity).execute_mass_import(record_ids)
This approach pushes tasks into a worker queue and runs them asynchronously, and with multiple channels even in parallel, which improves overall responsiveness.
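If you keep a lightweight cron as the trigger, its code field shrinks to a single enqueue call, so the scheduler thread returns immediately while queue workers do the heavy lifting; a sketch with an illustrative source model:

# New contents of the ir.cron "code" field
records = env['my.import.source'].search([('state', '=', 'pending')])
model.enqueue_mass_import(records.ids)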
Monitor and Maintain Performance
Set Up Automated Profiling
After applying optimizations, you will automate profiling regularly. Use a CI/CD pipeline to run a performance test that records profiling data for critical flows. For example, you can add GitHub Actions steps along these lines (the model and flow are placeholders, and the stats file is published as a build artifact):
- name: Run Odoo Performance Test
  run: |
    echo "import cProfile; pr = cProfile.Profile(); pr.enable(); env['my.module'].some_flow(); pr.disable(); pr.dump_stats('perf.stats')" | odoo shell -d my_db --no-http
- uses: actions/upload-artifact@v4
  with:
    name: perf-stats
    path: perf.stats
This setup lets you compare profiling runs between releases and spot regressions before they reach production.
Integrate with Monitoring Tools
Furthermore, you will forward timing metrics to an APM like New Relic or Datadog. For New Relic, install the Python agent and launch Odoo through its wrapper so it instruments HTTP routes and workers:
pip install newrelic
NEW_RELIC_CONFIG_FILE=/etc/odoo/newrelic.ini newrelic-admin run-program odoo -c /etc/odoo/odoo.conf
This integration provides continuous visibility into module performance in production.
Conclusion and Next Steps
In this tutorial, you have learned eight practical strategies to optimize module performance in Odoo. You used browser Developer Tools and network logs to profile frontend delays. You applied Python profiling with cProfile in the Odoo Shell. You inspected slow database statements via pg_stat_statements and added strategic indexes. You reduced ORM overhead by prefetching and filtering at the SQL level. You leveraged cache decorators and tuned worker memory and garbage collection. You replaced heavy cron tasks with asynchronous queue jobs. You set up automated profiling and integrated with APM solutions for ongoing monitoring.
By following these steps, you will significantly boost the speed and responsiveness of your custom modules. Now, you can deliver faster ERP solutions, improve user satisfaction, and scale your Odoo instance with confidence.