Optimizing Systems with Tuned Profiles and Process Scheduling
College-Level Course Module | RHEL System Administration
Learning Objectives
1. Understand system performance concepts: CPU scheduling, kernel parameters, and tunable settings
2. Use tuned to apply performance profiles: select and activate workload-specific tuning profiles
3. Adjust process priority with nice and renice: control CPU scheduling priority for individual processes
4. Monitor and verify performance changes: use tools to observe the effects of tuning adjustments
Performance Concepts
Performance tuning optimizes system behavior for specific workloads by adjusting kernel parameters, CPU scheduling, I/O behavior, and power management.
CPU Scheduling
How the kernel decides which processes run and for how long. Affects responsiveness vs throughput.
Kernel Parameters
Adjustable values in /proc/sys that control kernel behavior: memory, networking, I/O.
I/O Scheduling
How disk operations are ordered and prioritized. Different schedulers for HDD vs SSD.
Power Management
CPU frequency scaling, sleep states. Trade power consumption for performance.
No free lunch: Tuning involves tradeoffs. Optimizing for throughput may hurt latency. Optimizing for performance increases power consumption.
Introduction to tuned
tuned is a daemon that monitors your system and dynamically applies tuning adjustments. It provides pre-configured profiles for common workloads.
# Check if tuned is installed and running
[root@server ~]# systemctl status tuned
● tuned.service - Dynamic System Tuning Daemon
Loaded: loaded (/usr/lib/systemd/system/tuned.service; enabled)
Active: active (running) since Mon 2024-01-20 10:00:00 EST

# Install tuned if not present
[root@server ~]# dnf install tuned

# Enable and start tuned
[root@server ~]# systemctl enable --now tuned

# Check current active profile
[root@server ~]# tuned-adm active
Current active profile: virtual-guest
Pre-installed: tuned is installed and enabled by default on RHEL. It automatically selects a profile based on detected hardware and virtualization.
Available Profiles
# List all available profiles
[root@server ~]# tuned-adm list
Available profiles:
- accelerator-performance - Performance for GPU-accelerated systems
- balanced - Balance performance and power
- desktop - Desktop system optimization
- hpc-compute - High-performance computing
- intel-sst - Intel Speed Select Technology
- latency-performance - Low latency performance
- network-latency - Low latency network tuning
- network-throughput - High throughput networking
- optimize-serial-console - Serial console optimization
- powersave - Maximum power saving
- throughput-performance - High throughput performance
- virtual-guest - Virtual machine guest
- virtual-host - Virtual machine host
Current active profile: balanced

# Get recommendation based on system
[root@server ~]# tuned-adm recommend
virtual-guest
Common Profiles
balanced
Default compromise between performance and power. Good starting point for general use.
throughput-performance
Maximum throughput. Disables power saving, aggressive I/O. For servers and batch jobs.
latency-performance
Minimum latency. CPU always at max frequency, minimal delays. For real-time apps.
powersave
Minimum power use. Aggressive power saving, reduced performance. For laptops.
virtual-guest
Optimized for VMs. Reduces overhead, works with hypervisor. For cloud instances.
desktop
Interactive responsiveness. Prioritizes foreground apps. For workstations.
# Switch to a different profile
[root@server ~]# tuned-adm profile throughput-performance

# Verify the change
[root@server ~]# tuned-adm active
Current active profile: throughput-performance

# Check what the profile actually does
[root@server ~]# tuned-adm profile_info throughput-performance
Profile name:
throughput-performance
Profile summary:
Broadly applicable tuning that provides excellent performance
across a variety of common server workloads.
Profile description:
...

# Turn off tuned (restore defaults)
[root@server ~]# tuned-adm off

# Reactivate with recommended profile
[root@server ~]# tuned-adm profile $(tuned-adm recommend)
Immediate effect: Profile changes take effect immediately. No reboot required. Settings persist across reboots.
Profiles are layered: Many profiles inherit from others. throughput-performance extends balanced with throughput-specific overrides.
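Because profiles inherit via an include= line in their tuned.conf, you can trace the inheritance chain yourself. A minimal sketch, assuming the default /usr/lib/tuned layout (show_includes is an illustrative helper, not a tuned command):

```shell
# show_includes DIR: print "child -> parent" for every profile under DIR
# whose tuned.conf declares an include= line.
show_includes() {
  dir=${1:-/usr/lib/tuned}    # default system profile directory
  for conf in "$dir"/*/tuned.conf; do
    [ -f "$conf" ] || continue
    profile=$(basename "$(dirname "$conf")")
    parent=$(sed -n 's/^include=//p' "$conf")
    [ -n "$parent" ] && echo "$profile -> $parent"
  done
  return 0
}

show_includes    # e.g. prints lines like "network-latency -> latency-performance"
```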
Custom Profiles
# Create directory for custom profile
[root@server ~]# mkdir /etc/tuned/my-web-server

# Create custom profile based on throughput-performance
[root@server ~]# cat > /etc/tuned/my-web-server/tuned.conf << 'EOF'
[main]
summary=Custom profile for web servers
include=throughput-performance
[sysctl]
# Increase network buffers for high-traffic web server
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216
# More file handles for many connections
fs.file-max=2097152
[vm]
# Transparent huge pages for better memory performance
transparent_hugepages=always
EOF
# Activate custom profile
[root@server ~]# tuned-adm profile my-web-server

# Verify
[root@server ~]# tuned-adm active
Current active profile: my-web-server
Location matters: Custom profiles go in /etc/tuned/ (survives updates). System profiles are in /usr/lib/tuned/ (overwritten on updates).
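After activating a custom profile, it is worth spot-checking that at least one of its settings actually landed. A small sketch, reading the net.core.rmem_max value set by the example profile above (with a different profile active you will see whatever value currently applies):

```shell
# Read the live kernel value straight from /proc; with the my-web-server
# profile active this should report 16777216.
cat /proc/sys/net/core/rmem_max
# Equivalent: sysctl -n net.core.rmem_max
```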
Process Scheduling
The Linux scheduler decides which processes run on which CPUs and for how long. Priority influences these decisions - higher priority processes get more CPU time.
Linux Scheduling Classes (highest to lowest priority)
SCHED_FIFO, SCHED_RR     ← Real-time (priority 1-99)
SCHED_OTHER              ← Normal processes (nice -20 to +19)
SCHED_BATCH, SCHED_IDLE  ← Background tasks
For most tasks: You'll use nice and renice to adjust priority within SCHED_OTHER. Real-time scheduling requires special privileges and care.
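To see which priority range each scheduling class accepts on your kernel, chrt can print the limits; this is a safe, read-only check:

```shell
# List min/max priorities per scheduling policy. SCHED_OTHER is always
# 0/0 (its priority comes from the nice value); SCHED_FIFO/SCHED_RR use 1-99.
chrt -m
```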
Understanding Nice Values
Nice Value Scale
-20  Highest Priority
-10  High Priority
  0  Normal (Default)
+10  Low Priority
+19  Lowest Priority
# View nice values of running processes
[student@server ~]$ ps -el | head
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
4 S 0 1 0 0 80 0 - 43812 ep_pol ? 00:00:02 systemd
1 S 0 2 0 0 80 0 - 0 kthrea ? 00:00:00 kthreadd

# View with top (NI column)
[student@server ~]$ top
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1234 student 20 0 225832 4608 3584 R 95.0 0.1 5:23.45 compile
5678 mysql 20 0 1256320 524288 16384 S 45.0 12.8 2:15.67 mysqld
The name "nice": A process with high nice value is being "nice" to others by accepting lower priority. Low nice = not nice, demanding more CPU.
Using nice
# Start a process with higher nice value (lower priority)
[student@server ~]$ nice -n 10 ./long_running_script.sh
# Process runs at nice 10 - lower priority than default

# Start at nice 19 (lowest priority)
[student@server ~]$ nice -n 19 find / -name "*.log" 2>/dev/null

# Default nice command adds 10
[student@server ~]$ nice ./backup.sh    # Runs at nice 10

# Start with higher priority (lower nice value) - requires root
[root@server ~]# nice -n -10 /opt/critical-app/start.sh
# Process runs at nice -10 - higher priority than default

# Verify the nice value
[student@server ~]$ nice -n 15 bash -c 'ps -o pid,ni,comm -p $$'
PID NI COMMAND
12345 15 bash

# Nice value is inherited by child processes
[student@server ~]$ nice -n 10 bash    # Shell and all commands in it run at nice 10
Requires root: Only root can set negative nice values (higher priority). Regular users can only lower their priority (increase nice value).
Using renice
# Change nice value of running process by PID
[root@server ~]# renice -n 10 -p 1234
1234 (process ID) old priority 0, new priority 10

# Lower nice value (increase priority) - requires root
[root@server ~]# renice -n -5 -p 5678
5678 (process ID) old priority 0, new priority -5

# Change nice for all processes owned by a user
[root@server ~]# renice -n 5 -u student
1000 (user ID) old priority 0, new priority 5

# Change nice for all processes in a process group
[root@server ~]# renice -n 10 -g 1234

# Users can only increase their own processes' nice value
[student@server ~]$ renice -n 15 -p 1234    # OK if student owns PID 1234
[student@server ~]$ renice -n -5 -p 1234    # FAILS - can't lower nice without root
renice: failed to set priority for 1234 (process ID): Permission denied

# Find PID and renice in one step
[root@server ~]# renice -n 10 -p $(pgrep -f backup.sh)
Viewing Priorities
# ps with nice values
[student@server ~]$ ps -eo pid,ni,pri,comm --sort=-ni | head
PID NI PRI COMMAND
1234 19 0 backup
5678 15 5 indexer
9012 10 10 batch_job
1 0 20 systemd
567 -5 25 database

# top - interactive view (press 'r' to renice)
[student@server ~]$ top
top - 14:30:00 up 5 days, 3:22, 2 users, load average: 2.50, 2.10, 1.95
Tasks: 215 total, 3 running, 212 sleeping, 0 stopped, 0 zombie
%Cpu(s): 45.2 us, 5.1 sy, 0.0 ni, 49.0 id, 0.3 wa, 0.2 hi, 0.2 si
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1234 student 39 19 225832 4608 3584 R 25.0 0.1 5:23.45 backup
5678 mysql 15 -5 1256320 524288 16384 S 35.0 12.8 2:15.67 mysqld

# htop - more interactive (easier renice with F7/F8)
[root@server ~]# dnf install htop
[root@server ~]# htop
top shortcuts: Press r to renice a process. Enter PID, then new nice value. Press k to kill. Press h for help.
Practical Examples
# Scenario 1: Run backup without impacting production
[root@server ~]# nice -n 19 tar -czvf /backup/full-backup.tar.gz /data/

# Scenario 2: Database competing with batch jobs - prioritize database
[root@server ~]# renice -n -10 -p $(pgrep mysqld)
[root@server ~]# renice -n 15 -p $(pgrep batch_processor)

# Scenario 3: User's compile job consuming too much CPU
[root@server ~]# renice -n 10 -u developer

# Scenario 4: Start development shell at low priority
[developer@server ~]$ nice -n 10 bash
[developer@server ~]$ make -j8    # Compile at nice 10

# Scenario 5: Critical monitoring must always run
[root@server ~]# nice -n -15 /opt/monitoring/agent start

# Scenario 6: Bulk indexing job in Elasticsearch (don't impact searches)
[root@server ~]# nice -n 10 /usr/share/elasticsearch/bin/reindex.sh
Pattern: Background/batch tasks get nice 10-19. Critical services get nice -5 to -15. Normal services stay at 0.
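Where disk contention matters as much as CPU, nice can be paired with ionice, which sets the I/O scheduling class. A hedged sketch (low_impact is an illustrative helper name; the tar paths are from the scenarios above):

```shell
# low_impact CMD...: run CMD at the lowest CPU priority (nice 19) and
# in the idle I/O class (ionice -c 3), so it yields both CPU and disk
# to production work.
low_impact() {
  nice -n 19 ionice -c 3 "$@"
}

# Example (illustrative):
# low_impact tar -czf /backup/full-backup.tar.gz /data/
```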
Real-time with chrt
chrt changes the scheduling policy and priority of processes. Real-time scheduling (SCHED_FIFO, SCHED_RR) is for time-critical applications.
# View current scheduling policy and priority
[student@server ~]$ chrt -p 1234
pid 1234's current scheduling policy: SCHED_OTHER
pid 1234's current scheduling priority: 0

# Run process with SCHED_FIFO (first-in, first-out real-time)
[root@server ~]# chrt -f 50 /opt/realtime-app/start

# Run with SCHED_RR (round-robin real-time)
[root@server ~]# chrt -r 50 ./time-critical-task

# Change running process to real-time
[root@server ~]# chrt -f -p 50 1234

# Set to SCHED_BATCH (CPU-intensive batch jobs)
[root@server ~]# chrt -b 0 ./batch_processor

# Set to SCHED_IDLE (only when system is idle)
[root@server ~]# chrt -i 0 ./background_indexer
⚠ Caution: Real-time processes can lock up your system if they don't yield the CPU. Use carefully and test thoroughly. Requires root.
Monitoring Performance
# CPU and system overview
[student@server ~]$ top
[student@server ~]$ htop

# System resource summary
[student@server ~]$ vmstat 1 5
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 0 0 524288 65536 1048576 0 0 50 100 500 1000 45 5 49 1 0

# I/O statistics
[student@server ~]$ iostat -xz 1
Device r/s w/s rkB/s wkB/s %util
sda 10.5 25.3 420.0 1012.0 15.2

# Check current CPU governor
[student@server ~]$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
performance

# Verify tuned settings were applied
[root@server ~]# sysctl vm.swappiness
vm.swappiness = 10

# Check tuned's actual changes
[root@server ~]# tuned-adm verify
Verification succeeded, current system settings match the preset profile.
Combined Approach
Use tuned for system-wide optimization and nice/renice for process-specific adjustments. They work together, not instead of each other.
# tuned not applying settings?[root@server ~]# systemctl status tuned
[root@server ~]# tuned-adm verify
Verification failed, current system settings differ from the preset profile.
We checked the following:

# Check for conflicts with other tools
[root@server ~]# systemctl status power-profiles-daemon    # May conflict

# Process not respecting nice value?
[student@server ~]$ ps -o pid,ni,comm -p 1234
# Check if process is I/O bound (nice only affects CPU)

# System still slow after tuning?
[student@server ~]$ vmstat 1
# High 'b' column = processes blocked on I/O
# High 'wa' = CPU waiting on I/O (not a CPU problem)
# High 'si'/'so' = swap activity (memory problem)

# Identify the bottleneck
[student@server ~]$ top             # CPU bound?
[student@server ~]$ iostat -xz 1    # Disk I/O bound?
[student@server ~]$ free -h         # Memory bound?
[student@server ~]$ ss -s           # Network connections?
Nice only affects CPU: If a process is waiting for disk I/O or network, changing its nice value won't help. Identify the actual bottleneck first.
Best Practices
✓ Do
Choose tuned profile based on workload
Monitor before and after tuning
Use nice for background/batch jobs
Document profile choices and reasons
Test changes in non-production first
Verify settings actually applied
Start conservative, adjust gradually
Identify bottleneck before tuning
✗ Don't
Tune without measuring
Use real-time scheduling casually
Give everything high priority
Ignore the actual bottleneck
Assume one profile fits all workloads
Modify /usr/lib/tuned/ (use /etc/tuned/)
Forget nice only affects CPU scheduling
Skip verification after changes
Measure first: Before tuning, establish baseline metrics. After tuning, measure again. Without data, you can't know if changes helped.
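A minimal before/after capture might look like the sketch below (snapshot is an illustrative helper; extend it with vmstat, iostat, or application metrics as needed):

```shell
# snapshot NAME: record a timestamped load-average snapshot to perf-NAME.txt
snapshot() {
  { date; cat /proc/loadavg; } > "perf-$1.txt"
}

snapshot baseline
# ... apply tuned profile, renice, etc., let the workload settle ...
snapshot after
diff perf-baseline.txt perf-after.txt || true   # diff exits 1 when they differ
```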
Key Takeaways
1. tuned: system-wide profiles. tuned-adm profile NAME to apply, tuned-adm list to see options.
2. Profiles: balanced (default), throughput-performance, latency-performance, virtual-guest, powersave. Match to workload.
3. nice/renice: values -20 to +19 (lower = higher priority). nice -n 10 cmd to start, renice -n 10 -p PID to change.
4. Verify: use top, vmstat, and tuned-adm verify. Measure before and after. Identify the bottleneck first.
LAB EXERCISES
List available tuned profiles and check the active profile
Apply the throughput-performance profile and verify settings
Start a process with nice value 15, verify with ps
Use renice to change a running process's priority
Create a custom tuned profile inheriting from balanced
Monitor system with top and vmstat while changing profiles