Stress Test CPUs on Linux machines

I came across a scenario where I needed to stress test the CPUs on some new servers and didn’t want to install any extra software to do it, so I wrote a script to stress test the CPUs.

Features:

  • Customizable duration: Set any positive integer duration with -t.
  • Customizable core count: Stress a specific number of cores with -c, up to the maximum available.
  • Quiet mode: Use -q for minimal output, ideal for scripting or automated runs.
  • Input validation: Checks for valid duration and core count, with clear error messages.
  • Verbose reporting: Logs start/end times, core count, PIDs, and a countdown timer (unless in quiet mode).
  • Reliable cleanup: Stores PIDs and uses kill -9 to ensure yes processes are terminated.
  • Cleanup verification: Checks for lingering yes processes and warns if any remain.

Options:

  • -t duration: Set duration in seconds (default: 30).
  • -c cores: Specify number of CPU cores to stress (default: all available cores, as detected by nproc).
  • -q: Quiet mode, suppresses detailed output (only shows warnings or errors).

How to use:

  1. Save as max_cpu.sh.
  2. Make executable: chmod +x max_cpu.sh.
  3. Run with options, e.g.:
    • ./max_cpu.sh (default: 30 seconds, all CPU cores, verbose output)
    • ./max_cpu.sh -t 60 (run for 60 seconds)
    • ./max_cpu.sh -t 45 -c 2 (run for 45 seconds, stress 2 cores)
    • ./max_cpu.sh -t 20 -q (run for 20 seconds, minimal output)

Example output (verbose mode, ./max_cpu.sh -t 5 -c 2):

Starting CPU stress test at Fri May 30 22:25:47 NZST 2025
Using 2 CPU core(s) for 5 seconds
Launching 'yes' processes to max out CPU...
Started 'yes' process with PID 12345
Started 'yes' process with PID 12346
Running CPU at 100% for 5 seconds...
Time remaining: 5 seconds
Time remaining: 4 seconds
Time remaining: 3 seconds
Time remaining: 2 seconds
Time remaining: 1 seconds
Stopping 'yes' processes...
Terminated process with PID 12345
Terminated process with PID 12346
All 'yes' processes terminated successfully
CPU stress test completed at Fri May 30 22:25:52 NZST 2025

Example output (quiet mode, ./max_cpu.sh -t 5 -q):

(quiet mode produces no output unless a warning or error occurs)

Script:

#!/bin/bash

# Default values
DURATION=30
CORES=$(nproc)
VERBOSE=1  # 1 for verbose output, 0 for minimal

# Function to display usage
usage() {
    echo "Usage: $0 [-t duration] [-c cores] [-q]"
    echo "  -t duration: Duration in seconds (default: 30)"
    echo "  -c cores: Number of CPU cores to stress (default: all, $CORES)"
    echo "  -q: Quiet mode (minimal output)"
    exit 1
}

# Parse command-line options
while getopts "t:c:q" opt; do
    case $opt in
        t) DURATION=$OPTARG ;;
        c) CORES=$OPTARG ;;
        q) VERBOSE=0 ;;
        *) usage ;;
    esac
done

# Validate inputs
if ! [[ "$DURATION" =~ ^[0-9]+$ ]] || [ "$DURATION" -le 0 ]; then
    echo "Error: Duration must be a positive integer"
    usage
fi
if ! [[ "$CORES" =~ ^[0-9]+$ ]] || [ "$CORES" -le 0 ] || [ "$CORES" -gt "$(nproc)" ]; then
    echo "Error: Number of cores must be a positive integer up to $(nproc)"
    usage
fi

# Function to log messages (respects verbose mode)
log() {
    if [ "$VERBOSE" -eq 1 ]; then
        echo "$1"
    fi
}

log "Starting CPU stress test at $(date)"
log "Using $CORES CPU core(s) for $DURATION seconds"

# Array to store process IDs
pids=()

# Start 'yes' processes for specified number of cores
log "Launching 'yes' processes to max out CPU..."
for i in $(seq $CORES); do
    yes > /dev/null &
    pids+=($!)
    log "Started 'yes' process with PID ${pids[$i-1]}"
done

# Wait for specified duration, showing progress if verbose
if [ "$VERBOSE" -eq 1 ]; then
    log "Running CPU at 100% for $DURATION seconds..."
    for ((i=$DURATION; i>0; i--)); do
        echo "Time remaining: $i seconds"
        sleep 1
    done
else
    sleep $DURATION
fi

# Terminate all 'yes' processes
log "Stopping 'yes' processes..."
for pid in "${pids[@]}"; do
    kill -9 "$pid" 2>/dev/null
    log "Terminated process with PID $pid"
done

# Wait briefly to ensure cleanup
wait 2>/dev/null

# Verify no 'yes' processes remain
if ! pgrep -x "yes" > /dev/null; then
    log "All 'yes' processes terminated successfully"
else
    echo "Warning: Some 'yes' processes may still be running. Run 'sudo killall -9 yes' manually."
fi

log "CPU stress test completed at $(date)"

 

Backup Docker Container configuration script

A while ago I looked into a good way to back up the Docker configuration for each container. This is useful if you have many containers and want a backup strategy that includes container settings.

The script below is from https://github.com/007revad/Docker_Autocompose and relies on the red5d/docker-autocompose image, which it will pull if the image is not already present on your Docker server.

Save the following script as docker-autocompose.sh and run with the commands shown at the top.

#!/usr/bin/env sh
#--------------------------------------------------------------------------------------
# A script to create docker-compose.yml files from docker containers.
#
# Script can be run with a container name parameter to only process that container:
# sudo -s docker-autocompose.sh plex
#
# Or with no parameter, or the "all" parameter, to process all containers:
# sudo -s docker-autocompose.sh all
#
# https://github.com/007revad/Docker_Autocompose
# Adapted from: https://www.synoforum.com/threads/docker-autocompose.4644/#post-20341
#--------------------------------------------------------------------------------------
# REQUIRED:
#
# Needs Red5d/docker-autocompose installed in docker.
# Red5d/docker-autocompose should not be started in docker.
#--------------------------------------------------------------------------------------

# Set the path where you want to .yml files saved to. If blank will save in your home.
saveto="/opt/backup-docker"

# Set to yes to include hostname in the folder name.
# Handy if you have multiple devices that backup to the same location.
IncludeHostname=yes

# Set to yes to backup all containers (running or stopped), or no to backup only running containers.
BackupAllContainers=yes

#--------------------------------------------------------------------------------------

autocompose="red5d/docker-autocompose"

# Check script is running as root (otherwise docker.sock won't be accessible)
if [ "$( whoami )" != "root" ]; then
    echo "Script needs to run as root. Aborting."
    exit 1
fi

# Check our saveto path exists (if saveto is set)
if [ ! -d "${saveto}" ]; then
    echo "saveto path not found. Files will be saved in your home."
    saveto=
fi 

# Get hostname if IncludeHostname is yes
if [ "${IncludeHostname}" = "yes" ]; then
    host="$(hostname)_"
fi

# Create subfolder with date
if [ -d "${saveto}" ]; then
    Now=$( date '+%Y%m%d')
    if mkdir -p "${saveto}/${Now}_${host}docker-autocompose"; then
        path="${saveto}/${Now}_${host}docker-autocompose/"
    fi
fi

# Function to process a single container
process_container() {
    container_name="${1}"
    container_id=$(docker container ls -a -q --filter name="${container_name}")
    
    # Check if container exists
    if [ -z "${container_id}" ]; then
        echo "Container '${container_name}' not found. Skipping."
        return
    fi

    # Skip non-running containers if BackupAllContainers=no
    if [ "${BackupAllContainers}" != "yes" ]; then
        is_running=$(docker container inspect -f '{{.State.Running}}' "${container_id}")
        if [ "${is_running}" != "true" ]; then
            echo "Container '${container_name}' is not running. Skipping (BackupAllContainers=no)."
            return
        fi
    fi

    # Check if container is running
    is_running=$(docker container inspect -f '{{.State.Running}}' "${container_id}")
    
    if [ "${is_running}" != "true" ] && [ "${BackupAllContainers}" = "yes" ]; then
        echo "Starting container '${container_name}' temporarily..."
        if docker start "${container_id}" > /dev/null 2>&1; then
            # Generate docker-compose.yml
            docker run --rm -v /var/run/docker.sock:/var/run/docker.sock "${autocompose}" "${container_id}" > "${path}${container_name}-compose.yml"
            # Stop the container after generating the file
            docker stop "${container_id}" > /dev/null 2>&1
            echo "Generated docker-compose.yml for '${container_name}' and stopped it."
        else
            echo "Failed to start container '${container_name}'. Skipping."
            return
        fi
    else
        # Container is already running, generate docker-compose.yml directly
        docker run --rm -v /var/run/docker.sock:/var/run/docker.sock "${autocompose}" "${container_id}" > "${path}${container_name}-compose.yml"
        echo "Generated docker-compose.yml for running container '${container_name}'."
    fi
}

# Do the magic
case "${1}" in
    all|"")
        # Create a docker-compose.yml file for each container
        # Clear existing arguments
        while [ "${1}" ]; do
            shift
        done
        # Create array of container names based on BackupAllContainers setting
        if [ "${BackupAllContainers}" = "yes" ]; then
            set -- $(docker container ls -a --format '{{.Names}}')
            echo "Backing up all containers (running and stopped)."
        else
            set -- $(docker ps --format '{{.Names}}')
            echo "Backing up only running containers."
        fi
        while [ "${1}" ]; do
            process_container "${1}"
            shift
        done
        ;;
    *)
        # Only process specified container
        process_container "${1}"
        ;;
esac

echo "All done"
exit 0

 

Remove about author from WordPress without plugin

If you have a WordPress site and search engines are finding <yoursite>/author/admin or <yoursite>/author/<name of author>, and you don’t want this without adding another plugin (like Yoast SEO), then the following steps will redirect these pages to your homepage.

In the WordPress admin console, navigate to Appearance > Theme File Editor. Once there, select functions.php from the right-hand side.

Using the editor, add the following as a new block at the end of the file:

function my_custom_disable_author_page() {
  global $wp_query;

  if ( is_author() ) {
      // Redirect to homepage with a 301 permanent redirect.
      // wp_redirect() defaults to a 302 temporary redirect.
      wp_redirect( get_option('home'), 301 );
      exit;
  }
}
// Hook the function so it actually runs on each request.
add_action( 'template_redirect', 'my_custom_disable_author_page' );

Save the file and your site will now redirect author pages to your homepage.

 

Python Script for Analyzing Media File Audio Languages

This Python script scans a specified directory for media files, analyzes their audio streams using ffprobe (part of FFmpeg), and generates a report categorizing files based on their audio language. It separates files with specific non-English audio from files with undefined-language audio (or no audio), and provides a detailed report of all audio streams. The script is designed to run on Debian 12.
Key Features
  • Comprehensive Language Detection: Identifies English (‘eng’), specific non-English languages, and undefined (‘und’) audio streams.
  • Recursive Scanning: Processes all media files in the specified directory and its subdirectories.
  • Detailed Reporting: Provides both a summary of non-English/undefined files and a detailed breakdown of all audio streams.
  • Robust Error Handling: Skips problematic files and continues processing, with clear error messages.
  • Customizable: Media file extensions can be modified in the media_extensions set.
Script Summary:
  • Uses ffprobe to analyze media file streams
  • Supports common video formats (.mp4, .mkv, .avi, .mov, .wmv, .flv, .m4v)
  • Recursively scans all subdirectories
  • Creates a text file (audio_language_report.txt) with results
  • Handles errors gracefully
  • Takes the directory to scan as a command-line argument
The output file will contain:
  • A list of files with specific non-English audio
  • A list of files with undefined-language audio (or no audio)
  • A detailed breakdown of every file’s audio streams
Install Dependencies
First, install the required dependencies:
sudo apt update
sudo apt install ffmpeg python3
Save the script to a file (e.g., check_audio.py)
Script:
#!/usr/bin/env python3

import os
import subprocess
import json
import sys
from pathlib import Path

def get_audio_streams(file_path):
    """
    Get detailed information about audio streams in a media file
    Returns list of dictionaries containing stream info
    """
    try:
        cmd = [
            'ffprobe',
            '-v', 'error',
            '-show_streams',
            '-print_format', 'json',
            str(file_path)
        ]
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        
        data = json.loads(result.stdout)
        audio_streams = []
        
        for stream in data.get('streams', []):
            if stream.get('codec_type') == 'audio':
                stream_info = {
                    'index': stream.get('index', 'unknown'),
                    'codec': stream.get('codec_name', 'unknown'),
                    'language': stream.get('tags', {}).get('language', 'und'),
                    'channels': stream.get('channels', 'unknown')
                }
                audio_streams.append(stream_info)
        
        return audio_streams
    
    except (subprocess.CalledProcessError, json.JSONDecodeError) as e:
        print(f"Error processing {file_path}: {e}")
        return []

def scan_directory(directory_path, output_file):
    """
    Scan directory for media files and analyze audio stream languages
    """
    media_extensions = {'.mp4', '.mkv', '.avi', '.mov', '.wmv', '.flv', '.m4v'}
    directory = Path(directory_path)
    
    if not directory.is_dir():
        print(f"Error: {directory_path} is not a valid directory")
        return
    
    # Lists for results
    no_english_files = []
    undefined_lang_files = []
    detailed_report = []
    
    # Scan directory
    for file_path in directory.rglob('*'):
        if file_path.is_file() and file_path.suffix.lower() in media_extensions:
            print(f"Analyzing: {file_path}")
            audio_streams = get_audio_streams(file_path)
            
            # Build detailed report entry
            file_entry = f"File: {file_path}\n"
            if audio_streams:
                file_entry += f"  Found {len(audio_streams)} audio stream(s):\n"
                has_english = False
                all_undefined = True
                
                for stream in audio_streams:
                    lang = stream['language'].lower()
                    file_entry += f"    Stream {stream['index']}: {lang} ({stream['codec']}, {stream['channels']} channels)\n"
                    if lang == 'eng':
                        has_english = True
                    if lang != 'und':
                        all_undefined = False
                
                # Categorize the file
                if not has_english:
                    if all_undefined:
                        undefined_lang_files.append(str(file_path))
                    else:
                        no_english_files.append(str(file_path))
            else:
                file_entry += "  No audio streams found\n"
                undefined_lang_files.append(str(file_path))  # Treat no audio as undefined
            
            detailed_report.append(file_entry)
    
    # Write results
    try:
        with open(output_file, 'w') as f:
            # Summary of files without English (specific non-English languages)
            f.write("=== Files With Non-English Audio (Excluding Undefined) ===\n")
            if no_english_files:
                f.write(f"Found {len(no_english_files)} file(s) with specific non-English audio:\n")
                f.write("\n".join(no_english_files))
                f.write("\n\n")
            else:
                f.write("No files found with specific non-English audio.\n\n")
            
            # Summary of files with undefined language
            f.write("=== Files With Undefined Language Audio (or No Audio) ===\n")
            if undefined_lang_files:
                f.write(f"Found {len(undefined_lang_files)} file(s) with undefined language audio:\n")
                f.write("\n".join(undefined_lang_files))
                f.write("\n\n")
            else:
                f.write("No files found with undefined language audio.\n\n")
            
            # Detailed report
            f.write("=== Detailed Audio Stream Report ===\n")
            f.write("\n".join(detailed_report))
        
        print(f"Results written to {output_file}")
    except IOError as e:
        print(f"Error writing to output file: {e}")

def main():
    if len(sys.argv) != 2:
        print("Usage: ./check_audio.py <directory_path>")
        print("Example: ./check_audio.py /path/to/media")
        sys.exit(1)
    
    directory_path = sys.argv[1]
    output_file = "audio_language_report.txt"
    
    try:
        subprocess.run(['ffprobe', '-version'], capture_output=True, check=True)
    except (subprocess.CalledProcessError, FileNotFoundError):
        print("Error: FFmpeg is not installed. Please install it using 'sudo apt install ffmpeg'")
        sys.exit(1)
    
    scan_directory(directory_path, output_file)

if __name__ == "__main__":
    main()
Make the script executable:
chmod +x check_audio.py
Run the script:
./check_audio.py /path/to/media
Example output in audio_language_report.txt:
=== Files With Non-English Audio (Excluding Undefined) ===
Found 1 file(s) with specific non-English audio:
/path/to/video1.mkv

=== Files With Undefined Language Audio (or No Audio) ===
Found 2 file(s) with undefined language audio:
/path/to/video2.mp4
/path/to/video4.avi

=== Detailed Audio Stream Report ===
File: /path/to/video1.mkv
  Found 1 audio stream(s):
    Stream 1: spa (aac, 2 channels)

File: /path/to/video2.mp4
  Found 1 audio stream(s):
    Stream 1: und (mp3, 2 channels)

File: /path/to/video3.mkv
  Found 2 audio stream(s):
    Stream 1: eng (aac, 6 channels)
    Stream 2: jpn (aac, 2 channels)

File: /path/to/video4.avi
  Found 0 audio stream(s):
    No audio streams found
Step-by-Step Breakdown
Script Initialization and Dependencies
  • The script uses Python 3, which is included with Debian 12.
  • Requires FFmpeg (ffprobe) to analyze media files. Install it with:
sudo apt update
sudo apt install ffmpeg
  • Imports necessary Python modules: os, subprocess, json, sys, and pathlib.Path.
  • The script is executed with a command-line argument specifying the directory to scan.
Command-Line Argument Handling
  • The script expects a single command-line argument: the path to the directory to scan.
  • Usage example:
./check_audio.py /path/to/media
  • If no or incorrect arguments are provided, it displays usage instructions and exits:
Usage: ./check_audio.py <directory_path>
Example: ./check_audio.py /path/to/media
  • The output report is saved to a file named audio_language_report.txt.
FFmpeg Availability Check
  • Verifies that ffprobe is installed by running:
ffprobe -version
  • If FFmpeg is not installed, the script exits with an error message instructing the user to install it.
Audio Stream Analysis (get_audio_streams Function)
  • Uses ffprobe to extract stream information from a media file in JSON format.
  • Command executed:
ffprobe -v error -show_streams -print_format json <file_path>
  • Parses the JSON output to identify audio streams.
  • For each audio stream, collects:

    • Stream index
    • Codec name (e.g., aac, mp3)
    • Language tag (defaults to ‘und’ if undefined)
    • Number of channels
  • Returns a list of dictionaries containing stream details or an empty list if an error occurs (e.g., file corruption or invalid format).
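To make the parsing concrete, here is the same extraction logic run against a hypothetical ffprobe JSON response (the sample data is invented; the field names match real ffprobe -print_format json output):

```python
import json

# Invented sample of what ffprobe might return for a file with one
# video stream and one Spanish audio stream.
sample = """
{"streams": [
  {"index": 0, "codec_type": "video", "codec_name": "h264"},
  {"index": 1, "codec_type": "audio", "codec_name": "aac",
   "channels": 2, "tags": {"language": "spa"}}
]}
"""

data = json.loads(sample)
# Same extraction as get_audio_streams: keep only audio streams,
# defaulting the language tag to 'und' when it is missing.
audio_streams = [
    {
        "index": s.get("index", "unknown"),
        "codec": s.get("codec_name", "unknown"),
        "language": s.get("tags", {}).get("language", "und"),
        "channels": s.get("channels", "unknown"),
    }
    for s in data.get("streams", [])
    if s.get("codec_type") == "audio"
]
print(audio_streams)
# [{'index': 1, 'codec': 'aac', 'language': 'spa', 'channels': 2}]
```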
Directory Scanning (scan_directory Function)
  • Accepts the directory path and output file name as parameters.
  • Supports common media file extensions: .mp4, .mkv, .avi, .mov, .wmv, .flv, .m4v.
  • Recursively scans the directory using Path.rglob to find all media files.
  • For each file:
    • Calls get_audio_streams to retrieve audio stream details.
    • Builds a detailed report entry listing all audio streams, including their language, codec, and channels.
    • Categorizes the file based on its audio streams:
      • Files with English audio: If any stream has language ‘eng’, the file is excluded from summary lists.
      • Files with non-English audio: If no ‘eng’ stream exists and at least one stream has a specific language (e.g., ‘spa’, ‘fre’), the file is added to no_english_files.
      • Files with undefined language or no audio: If all streams are ‘und’ (undefined) or no audio streams exist, the file is added to undefined_lang_files.
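The categorization rules above can be sketched as a small standalone function (a hypothetical helper for illustration, not part of the script itself):

```python
def categorize(languages):
    """Given a file's audio language tags, return its report category."""
    if not languages:
        return "undefined"  # no audio streams is treated as undefined
    if "eng" in languages:
        return "english"    # excluded from the summary lists
    if all(lang == "und" for lang in languages):
        return "undefined"  # every stream lacks a language tag
    return "non_english"    # at least one specific non-English language

print(categorize(["eng", "jpn"]))  # english
print(categorize(["spa"]))         # non_english
print(categorize(["und"]))         # undefined
```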
Output Generation
Writes results to audio_language_report.txt in three sections:
  • Files With Non-English Audio (Excluding Undefined):
    • Lists files with specific non-English languages (e.g., Spanish, French).
    • Example:
Found 1 file(s) with specific non-English audio:
/path/to/video1.mkv
  • Files With Undefined Language Audio (or No Audio)
    • Lists files with only ‘und’ language tags or no audio streams.
    • Example:
Found 2 file(s) with undefined language audio:
/path/to/video2.mp4
/path/to/video4.avi
  • Detailed Audio Stream Report:
    • Lists all files with their audio stream details.
    • Example:
File: /path/to/video1.mkv
  Found 1 audio stream(s):
    Stream 1: spa (aac, 2 channels)
File: /path/to/video2.mp4
  Found 1 audio stream(s):
    Stream 1: und (mp3, 2 channels)
File: /path/to/video3.mkv
  Found 2 audio stream(s):
    Stream 1: eng (aac, 6 channels)
    Stream 2: jpn (aac, 2 channels)
File: /path/to/video4.avi
  Found 0 audio stream(s):
    No audio streams found
  • Handles IO errors gracefully, printing an error message if the output file cannot be written.
Error Handling
  • Checks for valid directory input; exits if the directory is invalid.
  • Handles ffprobe errors (e.g., corrupted files) by skipping problematic files and logging errors.
  • Manages JSON parsing errors, ensuring the script continues processing other files.

Python Script to Scan for Low-Bitrate MP3 Files

This Python script recursively scans a specified directory for MP3 files with a bitrate less than 320 kbps, lists the containing folders, and uses multiprocessing for speed and a progress bar for user feedback.
Below is a step-by-step breakdown of how the script works, suitable for users with basic Python knowledge.
Script Summary
  • Purpose: Identify folders containing MP3 files with bitrates below 320 kbps.
  • Features: Multiprocessing for faster scanning, progress bar for real-time feedback, initial file counting for accurate progress tracking.
  • Requirements: Python 3.x, mutagen (pip install mutagen), tqdm (pip install tqdm).
Usage Instructions
Install Dependencies
In a virtual environment:
python3 -m venv myenv
source myenv/bin/activate
pip install mutagen tqdm
Or for user-level installation, use:
pip install --user mutagen tqdm
Or via system package manager (Ubuntu/Debian):
sudo apt install python3-mutagen python3-tqdm
Save and Run the Script:
  • Save the script as scan_mp3.py.

Script:

import os
from mutagen.mp3 import MP3
from pathlib import Path
from tqdm import tqdm
import multiprocessing
from concurrent.futures import ProcessPoolExecutor, as_completed

def process_mp3_file(file_path):
    """
    Process a single MP3 file and return its parent folder if bitrate < 320 kbps.
    """
    try:
        audio = MP3(file_path)
        if audio.info.bitrate < 320000:  # Bitrate in bits/sec
            return str(file_path.parent)
        return None
    except Exception:
        return None

def scan_mp3_bitrate(directory, output_file):
    """
    Recursively scan a directory for MP3 files with bitrate < 320 kbps using multiprocessing,
    with initial file counting for accurate progress bar, and save results to a file.
    """
    low_bitrate_folders = set()
    
    # Count MP3 files for progress bar
    mp3_files = []
    print("Counting MP3 files for progress tracking...")
    for root, _, files in os.walk(directory):
        for file in files:
            if file.lower().endswith('.mp3'):
                mp3_files.append(Path(root) / file)
    
    total_files = len(mp3_files)
    if total_files == 0:
        print("\nNo MP3 files found in the directory.")
        with open(output_file, 'w', encoding='utf-8') as f:
            f.write("No MP3 files found in the directory.\n")
        return
    
    # Scan MP3 files with multiprocessing and progress bar
    print(f"\nScanning {total_files} MP3 files using {multiprocessing.cpu_count()} CPU cores...")
    with ProcessPoolExecutor() as executor:
        futures = [executor.submit(process_mp3_file, file_path) for file_path in mp3_files]
        
        # Process results with progress bar
        for future in tqdm(as_completed(futures), total=total_files, desc="Progress", unit="file"):
            result = future.result()
            if result:
                low_bitrate_folders.add(result)
    
    # Print and save results
    if low_bitrate_folders:
        print("\nFolders containing MP3 files with bitrate < 320 kbps:")
        with open(output_file, 'w', encoding='utf-8') as f:
            f.write("Folders containing MP3 files with bitrate < 320 kbps:\n")
            for folder in sorted(low_bitrate_folders):
                print(f"- {folder}")
                f.write(f"- {folder}\n")
        print(f"\nResults saved to: {output_file}")
    else:
        print("\nNo folders found with MP3 files under 320 kbps.")
        with open(output_file, 'w', encoding='utf-8') as f:
            f.write("No folders found with MP3 files under 320 kbps.\n")
        print(f"\nResults saved to: {output_file}")

def main():
    # Get directory from user
    directory = input("Enter the directory to scan (or press Enter for current directory): ").strip()
    if not directory:
        directory = os.getcwd()
    
    # Verify directory exists
    if not os.path.isdir(directory):
        print(f"Error: '{directory}' is not a valid directory.")
        return
    
    # Get output file path from user
    output_file = input("Enter the output file path (or press Enter for 'low_bitrate_folders.txt'): ").strip()
    if not output_file:
        output_file = 'low_bitrate_folders.txt'
    
    # Ensure output file has a .txt extension
    if not output_file.lower().endswith('.txt'):
        output_file += '.txt'
    
    print(f"\nScanning directory: {directory}")
    scan_mp3_bitrate(directory, output_file)

if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print("\nScript terminated by user.")
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
Run the script:
python scan_mp3.py
Enter a directory path or press Enter for the current directory
Example Output
Enter the directory to scan (or press Enter for current directory): /music

Scanning directory: /music
Counting MP3 files for progress tracking...

Scanning 15000 MP3 files using 8 CPU cores...
Progress: 100%|██████████████████████████| 15000/15000 [00:50<00:00, 300.00file/s]

Folders containing MP3 files with bitrate < 320 kbps:
- /music/Album1
- /music/Album2/Tracks
Performance Notes
  • Speed: Optimized for large datasets (e.g., 15,000 MP3 files) using multiprocessing, typically completing in 30–60 seconds on a 4–8 core CPU.
  • Initial Counting: Adds a few seconds to count files for an accurate progress bar.
  • Scalability: Handles large directories efficiently but may require tuning (e.g., limiting CPU cores) for low-memory systems or slow disks.
Troubleshooting
  • Slow Performance: If the initial counting or scanning is slow, check disk speed (HDD vs. SSD) or reduce CPU cores:
with ProcessPoolExecutor(max_workers=4) as executor:
  • Memory Issues: Reduce max_workers if memory usage is high.
  • Errors: Check for corrupted MP3 files or permission issues. Share error messages for support.
  • Dependencies: Ensure mutagen and tqdm are installed in the correct environment.
Customization Options
  • Add filters to skip specific folders (e.g., .git).
  • Log bitrates of low-bitrate files.
  • Switch to threading for I/O-bound tasks (e.g., slow external drives).
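As a sketch of the first customization, the counting loop from scan_mp3_bitrate could prune unwanted folders in place (the SKIP_DIRS set is an invented example):

```python
import os
from pathlib import Path

SKIP_DIRS = {".git", "@eaDir"}  # hypothetical folders to ignore

def collect_mp3s(directory):
    """Like the counting loop in scan_mp3_bitrate, but pruning skipped folders."""
    mp3_files = []
    for root, dirs, files in os.walk(directory):
        # Mutating dirs in place stops os.walk from descending into them.
        dirs[:] = [d for d in dirs if d not in SKIP_DIRS]
        for file in files:
            if file.lower().endswith(".mp3"):
                mp3_files.append(Path(root) / file)
    return mp3_files
```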

Step-by-Step Breakdown

Import Required Libraries:
    • os: Provides directory traversal functionality using os.walk() to recursively scan folders.
    • mutagen.mp3.MP3: Reads MP3 file metadata, specifically bitrate.
    • pathlib.Path: Handles file paths in a cross-platform way.
    • tqdm: Displays a progress bar for user feedback during file scanning.
    • multiprocessing and concurrent.futures.ProcessPoolExecutor: Enable parallel processing of MP3 files across CPU cores.
    • as_completed: Processes results as they complete for real-time progress updates.
Define process_mp3_file Function:
  • Purpose: Processes a single MP3 file to check its bitrate.
  • Input: A file path (as a Path object).
  • Process:
    • Loads the MP3 file using MP3(file_path) to access metadata.
    • Checks if the bitrate is less than 320,000 bits/sec (320 kbps).
    • Returns the parent folder path (as a string) if the bitrate is low, or None otherwise.
  • Error Handling: Catches exceptions (e.g., corrupted files) and returns None to avoid interrupting the scan.
Define scan_mp3_bitrate Function:
  • Purpose: Scans the directory for MP3 files and identifies folders with low-bitrate files.
  • Steps:
    • Initial File Counting:
      • Uses os.walk(directory) to recursively traverse the directory.
      • Collects paths of all files with .mp3 extension (case-insensitive) into a list.
      • Prints “Counting MP3 files for progress tracking…” to inform the user.
      • Stores the total count for the progress bar.
    • Check for Empty Directory:
      • If no MP3 files are found, prints “No MP3 files found in the directory.” and exits.
    • Multiprocessing Scan:
      • Initializes a ProcessPoolExecutor to use all available CPU cores.
      • Submits each MP3 file for processing using process_mp3_file.
      • Uses tqdm to display a progress bar, updating as files are processed.
      • Collects parent folder paths for low-bitrate files into a set to avoid duplicates.
    • Output Results:
      • If low-bitrate folders are found, prints them in sorted order.
      • If none are found, prints “No folders found with MP3 files under 320 kbps.”
Define main Function:
    • Purpose: Handles user input and script execution.
    • Steps:
      • Prompts the user to enter a directory path or press Enter to use the current directory (os.getcwd()).
      • Verifies the directory exists using os.path.isdir().
      • If invalid, prints an error and exits.
      • Calls scan_mp3_bitrate(directory, output_file) to start the scan.
      • Prints the directory being scanned for clarity.
Main Execution Block:
    • Purpose: Runs the script safely with error handling.
    • Process:
      • Wraps main() in a try-except block.
      • Catches KeyboardInterrupt (Ctrl+C) and prints “Script terminated by user.”
      • Catches unexpected errors and prints them for debugging.

Dig Quick Reference Manual

DIG


NAME

dig – DNS lookup utility

SYNOPSIS

dig [ @server ] [ -b address ] [ -c class ] [ -f filename ] [ -k filename ] [ -p port# ] [ -t type ] [ -x addr ] [ -y name:key ] [ name ] [ type ] [ class ] [ queryopt ]

dig [ -h ]

dig [ global-queryopt ] [ query ] 

DESCRIPTION

dig (domain information groper) is a flexible tool for interrogating DNS name servers. It performs DNS lookups and displays the answers that are returned from the name server(s) that were queried. Most DNS administrators use dig to troubleshoot DNS problems because of its flexibility, ease of use and clarity of output. Other lookup tools tend to have less functionality than dig.

Although dig is normally used with command-line arguments, it also has a batch mode of operation for reading lookup requests from a file. A brief summary of its command-line arguments and options is printed when the -h option is given. Unlike earlier versions, the BIND9 implementation of dig allows multiple lookups to be issued from the command line.

Unless it is told to query a specific name server, dig will try each of the servers listed in /etc/resolv.conf.

When no command line arguments or options are given, dig will perform an NS query for “.” (the root).

SIMPLE USAGE

A typical invocation of dig looks like:

 

 dig @server name type

where:

server
is the name or IP address of the name server to query. This can be an IPv4 address in dotted-decimal notation or an IPv6 address in colon-delimited notation. When the supplied server argument is a hostname, dig resolves that name before querying that name server. If no server argument is provided, dig consults /etc/resolv.conf and queries the name servers listed there. The reply from the name server that responds is displayed.
name
is the name of the resource record that is to be looked up.
type
indicates what type of query is required — ANY, A, MX, SIG, etc. type can be any valid query type. If no type argument is supplied, dig will perform a lookup for an A record.

OPTIONS

The -b option sets the source IP address of the query to address. This must be a valid address on one of the host’s network interfaces.

The default query class (IN for internet) is overridden by the -c option. class is any valid class, such as HS for Hesiod records or CH for CHAOSNET records.

The -f option makes dig operate in batch mode by reading a list of lookup requests to process from the file filename. The file contains a number of queries, one per line. Each entry in the file should be organised in the same way it would be presented as a query to dig using the command-line interface.
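For example, a batch file (the file name and the entries here are purely illustrative) could contain:

```
www.isc.org A
isc.org MX
-x 127.0.0.1
```

and would be processed with dig -f queries.txt.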

If a non-standard port number is to be queried, the -p option is used. port# is the port number to which dig will send its queries, instead of the standard DNS port number 53. This option would be used to test a name server that has been configured to listen for queries on a non-standard port number.

The -t option sets the query type to type. It can be any valid query type which is supported in BIND9. The default query type is “A”, unless the -x option is supplied to indicate a reverse lookup. A zone transfer can be requested by specifying a type of AXFR. When an incremental zone transfer (IXFR) is required, type is set to ixfr=N. The incremental zone transfer will contain the changes made to the zone since the serial number in the zone’s SOA record was N.

Reverse lookups – mapping addresses to names – are simplified by the -x option. addr is an IPv4 address in dotted-decimal notation, or a colon-delimited IPv6 address. When this option is used, there is no need to provide the name, class and type arguments. dig automatically performs a lookup for a name like 11.12.13.10.in-addr.arpa and sets the query type and class to PTR and IN respectively. By default, IPv6 addresses are looked up using the IP6.ARPA domain and binary labels as defined in RFC2874. To use the older RFC1886 method using the IP6.INT domain and “nibble” labels, specify the -n (nibble) option.

To sign the DNS queries sent by dig and their responses using transaction signatures (TSIG), specify a TSIG key file using the -k option. You can also specify the TSIG key itself on the command line using the -y option; name is the name of the TSIG key and key is the actual key. The key is a base-64 encoded string, typically generated by dnssec-keygen(8). Caution should be taken when using the -y option on multi-user systems as the key can be visible in the output from ps(1) or in the shell’s history file. When using TSIG authentication with dig, the name server that is queried needs to know the key and algorithm that is being used. In BIND, this is done by providing appropriate key and server statements in named.conf.

QUERY OPTIONS

dig provides a number of query options which affect the way in which lookups are made and the results displayed. Some of these set or reset flag bits in the query header, some determine which sections of the answer get printed, and others determine the timeout and retry strategies.

Each query option is identified by a keyword preceded by a plus sign (+). Some keywords set or reset an option. These may be preceded by the string no to negate the meaning of that keyword. Other keywords assign values to options like the timeout interval. They have the form +keyword=value. The query options are:

+[no]tcp
Use [do not use] TCP when querying name servers. The default behaviour is to use UDP unless an AXFR or IXFR query is requested, in which case a TCP connection is used.
+[no]vc
Use [do not use] TCP when querying name servers. This alternate syntax to +[no]tcp is provided for backwards compatibility. The “vc” stands for “virtual circuit”.
+[no]ignore
Ignore truncation in UDP responses instead of retrying with TCP. By default, TCP retries are performed.
+domain=somename
Set the search list to contain the single domain somename, as if specified in a domain directive in /etc/resolv.conf, and enable search list processing as if the +search option were given.
+[no]search
Use [do not use] the search list defined by the searchlist or domain directive in resolv.conf (if any). The search list is not used by default.
+[no]defname
Deprecated, treated as a synonym for +[no]search.
+[no]aaonly
This option does nothing. It is provided for compatibility with old versions of dig where it set an unimplemented resolver flag.
+[no]adflag
Set [do not set] the AD (authentic data) bit in the query. The AD bit currently has a standard meaning only in responses, not in queries, but the ability to set the bit in the query is provided for completeness.
+[no]cdflag
Set [do not set] the CD (checking disabled) bit in the query. This requests the server to not perform DNSSEC validation of responses.
+[no]recursive
Toggle the setting of the RD (recursion desired) bit in the query. This bit is set by default, which means dig normally sends recursive queries. Recursion is automatically disabled when the +nssearch or +trace query options are used.
+[no]nssearch
When this option is set, dig attempts to find the authoritative name servers for the zone containing the name being looked up and display the SOA record that each name server has for the zone.
+[no]trace
Toggle tracing of the delegation path from the root name servers for the name being looked up. Tracing is disabled by default. When tracing is enabled, dig makes iterative queries to resolve the name being looked up. It will follow referrals from the root servers, showing the answer from each server that was used to resolve the lookup.
+[no]cmd
Toggles the printing of the initial comment in the output identifying the version of dig and the query options that have been applied. This comment is printed by default.
+[no]short
Provide a terse answer. The default is to print the answer in a verbose form.
+[no]identify
Show [or do not show] the IP address and port number that supplied the answer when the +short option is enabled. If short form answers are requested, the default is not to show the source address and port number of the server that provided the answer.
+[no]comments
Toggle the display of comment lines in the output. The default is to print comments.
+[no]stats
This query option toggles the printing of statistics: when the query was made, the size of the reply and so on. The default behaviour is to print the query statistics.
+[no]qr
Print [do not print] the query as it is sent. By default, the query is not printed.
+[no]question
Print [do not print] the question section of a query when an answer is returned. The default is to print the question section as a comment.
+[no]answer
Display [do not display] the answer section of a reply. The default is to display it.
+[no]authority
Display [do not display] the authority section of a reply. The default is to display it.
+[no]additional
Display [do not display] the additional section of a reply. The default is to display it.
+[no]all
Set or clear all display flags.
+time=T
Sets the timeout for a query to T seconds. The default timeout is 5 seconds. An attempt to set T to less than 1 will result in a query timeout of 1 second being applied.
+tries=T
Sets the number of times to retry UDP queries to server to T instead of the default, 3. If T is less than or equal to zero, the number of retries is silently rounded up to 1.
+ndots=D
Set the number of dots that have to appear in name to D for it to be considered absolute. The default value is that defined using the ndots statement in /etc/resolv.conf, or 1 if no ndots statement is present. Names with fewer dots are interpreted as relative names and will be searched for in the domains listed in the search or domain directive in /etc/resolv.conf.
+bufsize=B
Set the UDP message buffer size advertised using EDNS0 to B bytes. The maximum and minimum sizes of this buffer are 65535 and 0 respectively. Values outside this range are rounded up or down appropriately.
+[no]multiline
Print records like the SOA records in a verbose multi-line format with human-readable comments. The default is to print each record on a single line, to facilitate machine parsing of the dig output.
+[no]fail
Do not try the next server if you receive a SERVFAIL. The default is to not try the next server which is the reverse of normal stub resolver behaviour.
+[no]besteffort
Attempt to display the contents of messages which are malformed. The default is to not display malformed answers.
+[no]dnssec
Requests DNSSEC records be sent by setting the DNSSEC OK bit (DO) in the OPT record in the additional section of the query.

MULTIPLE QUERIES

The BIND 9 implementation of dig supports specifying multiple queries on the command line (in addition to supporting the -f batch file option). Each of those queries can be supplied with its own set of flags, options and query options.

In this case, each query argument represents an individual query in the command-line syntax described above. Each consists of any of the standard options and flags, the name to be looked up, an optional query type and class and any query options that should be applied to that query.

A global set of query options, which should be applied to all queries, can also be supplied. These global query options must precede the first tuple of name, class, type, options, flags, and query options supplied on the command line. Any global query options (except the +[no]cmd option) can be overridden by a query-specific set of query options. For example:

 

dig +qr www.isc.org any -x 127.0.0.1 isc.org ns +noqr

shows how dig could be used from the command line to make three lookups: an ANY query for www.isc.org, a reverse lookup of 127.0.0.1 and a query for the NS records of isc.org. A global query option of +qr is applied, so that dig shows the initial query it made for each lookup. The final query has a local query option of +noqr which means that dig will not print the initial query when it looks up the NS records for isc.org.

FILES

/etc/resolv.conf

SEE ALSO

host(1), named(8), dnssec-keygen(8), RFC1035.

BUGS

There are probably too many query options.

Simple Raspberry Pi RTSP stream Dashboard

So, you have a Raspberry Pi and want to use it as a dashboard to display an RTSP stream without having to install a full desktop environment or window manager.  This is useful for showing security camera streams and similar feeds while keeping the solution simple and lightweight.

Parts needed

  • Raspberry Pi
  • Screen

Setup your Pi

You can install Raspberry Pi OS Lite.  Once that is up and running, you need to update it:

sudo apt update && sudo apt upgrade -y

Next, install the required packages:

sudo apt install -y xserver-xorg ffmpeg

Now reboot:

sudo reboot

Once the Pi is back up, create a service file in /etc/systemd/system/ :

sudo nano /etc/systemd/system/stream.service

and paste the following:

[Unit]
Description=RTSP Stream to attached Screen with ffplay
After=multi-user.target rescue.service rescue.target display-manager.service

[Service]
Type=simple
# UID and GID to run the service under (systemd does not support
# inline comments after a value, so keep comments on their own lines)
User=1000
Group=1000
ExecStart=/usr/bin/ffplay -autoexit -rtsp_transport tcp -sync video -fflags nobuffer -framedrop -i rtsp://hostname:port
# Change 1000 to match the User= above
Environment="XDG_RUNTIME_DIR=/run/user/1000"
Restart=always
RestartSec=20

[Install]
WantedBy=multi-user.target

Once that file is saved, run the following commands

sudo systemctl daemon-reload
sudo systemctl enable stream.service
sudo systemctl start stream.service

That’s it.  All going well, your RTSP stream should now come up automatically at boot.  Enjoy!

Hardware Watchdog on Linux Machines

On any modern Linux operating system that uses systemd you can configure systemd to interact with the hardware watchdog on your behalf, rather than doing it with the watchdog service (apt install watchdog) or another separate user-space daemon.

To enable the built-in systemd hardware watchdog, edit the following file:

 sudo nano /etc/systemd/system.conf

uncomment and set the following values:

RuntimeWatchdogSec=10 
RebootWatchdogSec=2min 
WatchdogDevice=/dev/watchdog0

The first entry, RuntimeWatchdogSec, enables the systemd watchdog and sets the hardware watchdog timeout while the system is running (systemd feeds the device at half that interval, so a hung kernel or systemd triggers a hardware reset); RebootWatchdogSec sets the watchdog timeout used during reboots, so a hung reboot still ends in a hardware reset; and WatchdogDevice is the hardware device to be fed.

Most modern hardware has a watchdog timer, including Raspberry Pis and most Intel chips beyond generation 7.

A bit more on the built-in systemd watchdog…

Systemd’s watchdog can be used for three main actions:

  • hardware reset (leveraging the CPU hardware watchdog exposed at /dev/watchdog). This is enabled by the RuntimeWatchdogSec= option in /etc/systemd/system.conf
  • application reset, provided this is set up in the systemd unit definition (see the example service below)
  • system reset as a fallback measure in response to multiple unsuccessful application resets. Also defined in the systemd unit

example unit file:

[Unit]
Description=My Little Daemon
Documentation=man:mylittled(8)

[Service]
ExecStart=/usr/bin/mylittled
WatchdogSec=30s
Restart=on-failure
StartLimitInterval=5min
StartLimitBurst=4
StartLimitAction=reboot-force

The example is taken from: http://0pointer.de/blog/projects/watchdog.html, which gives a pretty complete overview of what the watchdog service can do and how to use it.
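For the application-reset case (WatchdogSec= in the unit), the daemon itself has to ping systemd at intervals shorter than WatchdogSec by sending WATCHDOG=1 to the datagram socket named in $NOTIFY_SOCKET. Below is a minimal Python sketch of that keep-alive; the function names are my own, and a real service would typically use the sd_notify(3) C API or a systemd binding instead.

```python
import os
import socket

def sd_notify(message: bytes) -> bool:
    """Send a state notification to systemd via the $NOTIFY_SOCKET datagram socket."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False  # not running under systemd supervision
    if addr.startswith("@"):
        addr = "\0" + addr[1:]  # Linux abstract-namespace socket
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
        s.sendto(message, addr)
    return True

def feed_watchdog() -> bool:
    # Call this from the daemon's main loop, more often than half of
    # WatchdogSec= (e.g. at least every 15s for WatchdogSec=30s).
    return sd_notify(b"WATCHDOG=1")
```

With WatchdogSec=30s as in the unit above, systemd will trigger the Restart= action if no WATCHDOG=1 message arrives within 30 seconds.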

Boot Raspberry Pi 3B with USB SSD

Ensure USB Boot OTP is set

To enable the USB boot bit, the Raspberry Pi 3 needs to be booted from an SD card with a config option to enable USB boot mode. Once this bit has been set, the SD card is no longer required. Note that any change you make to the OTP is permanent and cannot be undone.

You can use any SD card running Raspbian or Raspbian Lite to program the OTP bit. First, prepare the /boot directory with up-to-date boot files:-

 sudo apt update && sudo apt upgrade && sudo reboot

Then enable USB boot mode with this code:-

echo program_usb_boot_mode=1 | sudo tee -a /boot/config.txt

This adds program_usb_boot_mode=1 to the end of /boot/config.txt. Reboot the Raspberry Pi with:-

sudo reboot

Then check that the OTP has been programmed with:-

 vcgencmd otp_dump | grep 17:

Check that the output 17:3020000a is shown. If it is not, then the OTP bit has not been successfully programmed. In this case, go through the programming procedure again. If the bit is still not set, this may indicate a fault in the Pi hardware itself.

If you wish, you can remove the ‘program_usb_boot_mode’ line from config.txt, so that if you put the SD card in another Raspberry Pi, it won’t program USB boot mode. Make sure there is no blank line at the end of config.txt. You can edit config.txt using the nano editor using the command:-

sudo nano /boot/config.txt              # then scroll all the way to the bottom

Ensure Pi waits for USB to initialise

There are two different things which go by the same name program_usb_boot_timeout (previously called program_usb_timeout): the OTP bit and the corresponding parameter in config.txt. The latter is used to set the former (by booting from an SD card), but once the OTP bit is set, there is no need for the SD card anymore. And just in case it’s not clear, OTP is one-time programmable memory, so its content is persistent across reboots (and cannot be changed back).

So the full procedure goes like this:

  • prepare a bootable SD card and boot from it
  • run sudo BRANCH=next rpi-update
  • add program_usb_boot_timeout=1 to your config.txt
  • reboot (this is the moment OTP bit will be programmed)
  • power off, remove the SD card and plug USB device
  • power on.

Fusermount3 error with rclone

Some distributions do not ship the fusermount binary as fusermount3, which rclone needs in certain circumstances (such as mounting as a daemon).  To fix this, create a symlink to fusermount somewhere in your PATH:

sudo ln -s /bin/fusermount /bin/fusermount3

That’s it, rclone will now mount without a fuse error.  You may need to alter the source path (/bin/fusermount) to reflect where fusermount lives on your system.