How to Clean System Data on macOS and Free Up Disk Space (Complete 2026 Guide)

Is your Mac showing 100GB+ of System Data and you have no idea what is consuming the space?

Recently, my Mac was showing 120.44GB of System Data. After a structured, safe cleanup process (without any third-party cleaner tools), I freed up 40GB+ of disk space.

This complete guide will help you clean up System Data on macOS safely — especially if you are a developer using Docker, VS Code, Node, Composer, or similar tools.


What is System Data on macOS?

System Data is Apple’s grouped storage category. It may include:

  • Time Machine local snapshots
  • System cache files
  • Log files
  • Docker virtual machine disk images
  • Developer build artifacts
  • iPhone/iPad backups
  • Virtual memory swap files
  • macOS update leftovers

macOS hides these files under one label, which makes it difficult to identify the real storage usage.


Step-by-Step Guide to Clean System Data on macOS

Follow these steps carefully. These methods are safe and will not harm your applications or personal files.


Step 1: Restart Your Mac

Before starting cleanup:

  1. Restart your Mac
  2. Go to System Settings → General → Storage
  3. Check if System Data recalculates

Sometimes macOS misreports usage and corrects itself after reboot.


Step 2: Remove Time Machine Local Snapshots

If Time Machine is enabled and your external backup disk is not connected, macOS creates local snapshots on your internal disk.

Check local snapshots

Open Terminal and run:

tmutil listlocalsnapshots /

If multiple snapshots appear, they are consuming disk space.

Remove snapshots safely

sudo tmutil thinlocalsnapshots / 9999999999 4

This will not delete your external backups.

This step alone can free 20GB–60GB.
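If you want to see what those snapshots are before thinning them, a small helper can pull the dates out of the `tmutil` output. This is a sketch that assumes the typical snapshot naming (com.apple.TimeMachine.YYYY-MM-DD-HHMMSS.local); Apple does not guarantee this format across macOS versions, so treat it as a convenience, not an API.

```sh
#!/bin/sh
# Extract one YYYY-MM-DD date per snapshot from `tmutil listlocalsnapshots /`
# output. Assumes the common com.apple.TimeMachine.<date>-<time>.local naming.

snapshot_dates() {
    sed -n 's/^com\.apple\.TimeMachine\.\([0-9-]\{10\}\)-[0-9]*\.local$/\1/p'
}
```

Usage: `tmutil listlocalsnapshots / | snapshot_dates | sort -u` shows at a glance how many days of local snapshots are sitting on your internal disk.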


Step 3: Clean Docker Data

Even if the Docker app itself appears to be only 1–2GB, its internal VM disk image may consume far more.

Ensure Docker is running

docker version

If you see an error like:

“Cannot connect to the Docker daemon…”

Open Docker Desktop from Applications and wait until it fully starts.

Check Docker disk usage

docker system df

Remove unused Docker data

docker system prune -a
docker volume prune
docker builder prune

If Docker is corrupted or not starting, check its storage directory:

du -sh ~/Library/Containers/com.docker.docker

If very large and you do not need old containers, quit Docker and remove:

rm -rf ~/Library/Containers/com.docker.docker

Restart Docker Desktop after deletion.
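Before pruning, you can estimate how much space the prune commands will actually return. The sketch below sums the reclaimable sizes reported by `docker system df --format '{{.Reclaimable}}'` (values look like "3.2GB (71%)" or "80MB"); the unit handling is deliberately rough and assumes your Docker version supports this format flag.

```sh
#!/bin/sh
# Sum reclaimable Docker space (in MB) from lines like "3.2GB (71%)".

sum_reclaimable_mb() {
    awk '{
        v = $1                       # size token; percentage, if any, is $2
        if      (v ~ /GB$/) { sub(/GB$/, "", v); total += v * 1024 }
        else if (v ~ /MB$/) { sub(/MB$/, "", v); total += v }
        else if (v ~ /kB$/) { sub(/kB$/, "", v); total += v / 1024 }
    } END { printf "%.0f\n", total }'
}
```

Usage: `docker system df --format '{{.Reclaimable}}' | sum_reclaimable_mb` prints a single approximate MB figure before you commit to the prune.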



Step 4: Clean VS Code Caches

VS Code itself is small, but caches and workspace storage grow large.

Check usage:

du -sh "$HOME/Library/Application Support/Code"

You can safely remove:

~/Library/Application Support/Code/Cache
~/Library/Application Support/Code/CachedData
~/Library/Application Support/Code/User/workspaceStorage

Do not delete the entire Code folder if you still use VS Code.
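To make that advice hard to get wrong, here is a small function that removes only the three safe cache folders and nothing else. The base path is a parameter so you can test it against a copy first; the real location is "$HOME/Library/Application Support/Code".

```sh
#!/bin/sh
# Remove only the safe-to-delete VS Code cache folders, leaving user
# settings and extensions intact.

clean_vscode_caches() {
    base=$1
    for sub in Cache CachedData User/workspaceStorage; do
        if [ -d "$base/$sub" ]; then
            rm -rf "$base/$sub"
            echo "removed $sub"
        fi
    done
}
```

Usage: `clean_vscode_caches "$HOME/Library/Application Support/Code"`. Because the function only ever touches the three listed subfolders, your settings.json and extensions survive.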



Step 5: Clear Node, NPM, and Composer Cache

NPM Cache

du -sh ~/.npm
npm cache clean --force

Yarn Cache (if used)

du -sh ~/.yarn

Composer Cache

du -sh ~/.composer
composer clear-cache

These steps can free 5GB–15GB on development machines.
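If you want one overview before cleaning, a small loop can report each cache directory's size in a single pass, silently skipping tools you don't have installed:

```sh
#!/bin/sh
# Print the size of each existing cache directory; skip missing ones.

report_cache_sizes() {
    for dir in "$@"; do
        [ -d "$dir" ] && du -sh "$dir"
    done
    return 0
}
```

Usage: `report_cache_sizes ~/.npm ~/.yarn ~/.composer` gives you the before picture, so you can re-run it after `npm cache clean --force` and `composer clear-cache` to see exactly what was freed.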



Step 6: Remove System Caches

Delete contents (not folders) of:

~/Library/Caches
/Library/Caches

Restart your Mac after cleaning.

macOS will automatically recreate necessary cache files.
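Deleting the contents but not the folder itself is exactly the kind of step worth scripting. The sketch below takes the directory as an explicit argument, so nothing touches ~/Library/Caches unless you pass it in:

```sh
#!/bin/sh
# Empty a cache directory's contents without removing the directory itself.

empty_dir_contents() {
    dir=$1
    [ -d "$dir" ] || return 1
    # Only top-level entries are passed to rm; rm -rf recurses into them,
    # so the parent directory is never deleted.
    find "$dir" -mindepth 1 -maxdepth 1 -exec rm -rf {} +
}
```

Usage: `empty_dir_contents "$HOME/Library/Caches"` (and with sudo for /Library/Caches). Some apps hold cache files open, so expect the occasional permission error; a restart afterwards lets macOS rebuild what it needs.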


Step 7: Delete Old iPhone or iPad Backups

Check for device backups:

~/Library/Application Support/MobileSync/Backup

Delete backups you no longer need.


Step 8: Virtual Memory Files (Do Not Delete Manually)

Located at:

/private/var/vm

Do not manually delete files in this folder.

Restart your Mac to reduce swap file size.


Step 9: Identify Largest Folders on Your Mac

To identify actual disk usage:

du -h -d 1 ~ | sort -hr | head -20

If Library appears large, drill deeper:

du -h -d 1 ~/Library | sort -hr | head -20

This will reveal the real space-consuming directories.
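Since you will run this drill-down repeatedly, it is worth wrapping in a tiny function. This is a convenience sketch; note that BSD `du` on macOS wants its options before the path, which the wrapper handles for you:

```sh
#!/bin/sh
# Top N space consumers one level below a directory (default: $HOME, top 20).

topdirs() {
    dir=${1:-$HOME}
    n=${2:-20}
    du -h -d 1 "$dir" 2>/dev/null | sort -hr | head -n "$n"
}
```

Usage: `topdirs`, then `topdirs ~/Library 10`, and keep drilling into whichever directory dominates the list.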


What You Should NOT Delete

Never manually delete:

/System
/usr
/bin
/sbin

Avoid random Mac cleaner applications. Everything in this guide uses built-in macOS tools.


Final Results After Cleanup

After following this process:

  • System Data reduced significantly
  • Recovered 40GB+ disk space
  • No applications were harmed
  • No data was lost

Frequently Asked Questions (FAQs)

Why is System Data so large on macOS?

System Data increases due to Time Machine local snapshots, cache files, Docker images, logs, and developer build artifacts. macOS groups these under one storage category instead of displaying them individually.

Is it safe to delete System Data on Mac?

You should never manually delete protected system folders like /System or /usr. However, removing local snapshots, clearing caches, pruning Docker volumes, and cleaning developer artifacts is safe when done properly.

How much storage can I recover by cleaning System Data?

Most developers can recover between 20GB and 80GB, depending on accumulated system data. In real-world usage, freeing 40GB or more is common.

Do I need third-party cleaner apps?

No. You can safely clean System Data using built-in macOS tools and official terminal commands without installing third-party cleaner applications.


Final Thoughts

System Data on macOS is not mysterious — it is accumulated temporary files, snapshots, developer caches, and system artifacts grouped under one label.

By following this complete guide, you can safely clean System Data on macOS and free up storage space without using third-party tools.

For developers, regular maintenance every few months will prevent storage from growing out of control again.


How to Install and Secure Ollama on VPS (Ubuntu + Apache) – Complete Step-by-Step Guide

If you want to run Large Language Models (LLMs) like Mistral, Phi, Llama 3, Gemma, DeepSeek or GPT-OSS on your own VPS, Ollama is one of the easiest and most developer-friendly solutions available today.

In this complete guide, I will show you how to:

  • Install Ollama on Ubuntu VPS
  • Run it safely alongside Laravel, MySQL, Apache, and N8N
  • Connect it to a subdomain
  • Secure it using Let’s Encrypt SSL
  • Authenticate requests
  • Fix common errors (including OOM killer issue)
  • Choose the right LLM for an 8GB VPS
  • Integrate Ollama with N8N workflows

This guide is production-focused and tested on a Contabo Ubuntu server running Apache, Docker, MySQL, Composer, PHP, and self-hosted N8N.


Server Configuration Used

  • Ubuntu VPS (Contabo)
  • 8GB RAM
  • 3 CPU cores
  • Apache
  • MySQL
  • Docker
  • 2 Laravel applications
  • 1 self-hosted N8N instance

If your configuration is similar, this guide will work perfectly.


Step 1: Can We Run GPT-OSS-20B or 120B on 8GB VPS?

Before installation, let’s talk about physics.

❌ GPT-OSS-120B

Requires 60GB+ RAM (even quantized). Not possible.

❌ GPT-OSS-20B

Needs at least 12–16GB RAM. Will crash on 8GB.

✅ Recommended for 8GB VPS

You should use:

  • Mistral 7B (Q4) – Balanced
  • Llama 3 8B (tight but possible)
  • Gemma 7B
  • Phi (Best lightweight choice)

If your VPS also runs Laravel + MySQL + N8N, the safest choice is:

👉 Phi


Step 2: Install Ollama Using Docker (Recommended)

We isolate Ollama so it does not affect running applications.

docker run -d \
  --name ollama \
  -p 127.0.0.1:11434:11434 \
  -e OLLAMA_NUM_THREADS=3 \
  -e OLLAMA_MAX_LOADED_MODELS=1 \
  -v ollama:/root/.ollama \
  --restart unless-stopped \
  ollama/ollama

Important:

  • Bound to 127.0.0.1 (not public)
  • Limited to single model
  • Uses 3 CPU threads

Step 3: Pull Model

For lightweight production setup:

docker exec -it ollama ollama pull phi

Check installed models:

docker exec -it ollama ollama list

Step 4: Test Ollama Locally

curl -X POST http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d '{
    "model": "phi",
    "prompt": "Explain SSL simply",
    "stream": false
  }'

If you do not add the Content-Type header, the request may hang without error.
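With "stream": false, the reply is a single JSON object whose "response" field holds the generated text. If jq is not on the box, a naive sed extraction works for quick checks. This is a sketch: it assumes the response text contains no escaped quotes, so for anything serious install jq and use `.response` instead.

```sh
#!/bin/sh
# Pull the "response" value out of a non-streaming /api/generate reply.
# Naive: breaks if the text itself contains escaped double quotes.

extract_response() {
    sed -n 's/.*"response":"\([^"]*\)".*/\1/p'
}
```

Usage: pipe the curl output above into `extract_response` to see just the model's answer.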


Common Error 1: llama runner process terminated: signal: killed

Error:

{"error":"llama runner process has terminated: signal: killed"}

Cause: Linux OOM Killer (Out Of Memory).

Check using:

dmesg | grep -i kill

Solution:

  • Use smaller model (Phi)
  • Add swap
  • Avoid 7B models on shared 8GB production

Add swap (optional):

fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab
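A quick way to decide whether you need that swap at all is to check the "available" column of `free -m` before loading a model. The 3000MB threshold below is my assumption for roughly what a quantized 7B model needs free to stay clear of the OOM killer; adjust it for your model.

```sh
#!/bin/sh
# Reads `free -m` output on stdin and prints yes/no depending on whether
# available memory is below an (assumed) 3000MB safety threshold.

needs_swap() {
    awk '/^Mem:/ { print ($7 < 3000) ? "yes" : "no" }'
}
```

Usage: `free -m | needs_swap`. If it prints "yes", add the swap file before pulling anything bigger than Phi.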

Step 5: Create Subdomain for Ollama

Add A record:

ollama.yourdomain.com → VPS IP

Step 6: Configure Apache Reverse Proxy

Enable modules:

a2enmod proxy
a2enmod proxy_http
a2enmod headers
systemctl restart apache2

Create /etc/apache2/sites-available/ollama.conf

<VirtualHost *:80>
    ServerName ollama.yourdomain.com
    RewriteEngine On
    RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>

SSL virtual host:

<IfModule mod_ssl.c>
<VirtualHost *:443>
    ServerName ollama.yourdomain.com

    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:11434/
    ProxyPassReverse / http://127.0.0.1:11434/

    RequestHeader set X-Forwarded-Proto "https"

    <Location />
        AuthType Basic
        AuthName "Restricted Ollama"
        AuthUserFile /etc/apache2/.ollama_htpasswd
        Require valid-user
    </Location>

    SSLCertificateFile /etc/letsencrypt/live/ollama.yourdomain.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/ollama.yourdomain.com/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf

</VirtualHost>
</IfModule>
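One detail the vhost above depends on: /etc/apache2/.ollama_htpasswd must exist before Basic Auth will let anyone in. The usual tool is `htpasswd -c /etc/apache2/.ollama_htpasswd youruser` from apache2-utils; if that package is not installed, an equivalent APR1-format entry can be generated with openssl, as sketched here (username and password are placeholders).

```sh
#!/bin/sh
# Produce a single htpasswd-style line (user:$apr1$...) using openssl,
# as a fallback when the htpasswd tool is not available.

make_htpasswd_line() {
    user=$1 pass=$2
    printf '%s:%s\n' "$user" "$(openssl passwd -apr1 "$pass")"
}
```

Usage: `make_htpasswd_line admin 'S3cretPass' >> /etc/apache2/.ollama_htpasswd`, then reload Apache.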

Step 7: Install SSL (Let’s Encrypt)

apt install certbot python3-certbot-apache -y
certbot --apache -d ollama.yourdomain.com

Test:

curl -u username:password https://ollama.yourdomain.com/api/tags

Step 8: Connect Ollama with N8N

In N8N:

  1. Add HTTP Request Node
  2. Method: POST
  3. URL: https://ollama.yourdomain.com/api/generate
  4. Authentication: Basic Auth
  5. Headers: Content-Type: application/json

Body:

{
  "model": "phi",
  "prompt": "Summarize {{ $json.content }}",
  "stream": false
}

Access response using:

{{$json["response"]}}

Monitoring RAM Usage on VPS

Check memory:

free -m

Live monitoring:

apt install htop
htop

Docker container usage:

docker stats

Top memory processes:

ps aux --sort=-%mem | head -15

Removing Large Models (If Needed)

docker exec -it ollama ollama rm mistral

Restart container:

docker restart ollama

Performance Expectations (8GB VPS)

Phi:

  • 1.5–2GB RAM usage
  • Fast responses
  • Stable for production

Mistral 7B:

  • 4–5GB RAM
  • Risky on shared server

Production Recommendations

If AI becomes core to your business:

  • Move Ollama to separate VPS (16GB+)
  • Or use GPU server
  • Or hybrid model (local + cloud)

Never run 20B+ models on 8GB shared production server.


How to Completely Uninstall Ollama from VPS

If you want to remove Ollama completely from your server, follow the steps below carefully. This process removes the Docker container, models, Apache configuration, SSL certificate, and authentication setup.

Step 1: Stop the Ollama Container

docker stop ollama

Step 2: Remove the Container

docker rm ollama

Step 3: Remove Ollama Docker Volume (Deletes All Models)

docker volume rm ollama

This step frees disk space used by downloaded models such as Phi or Mistral.

Step 4: Remove Ollama Docker Image (Optional)

docker rmi ollama/ollama

Step 5: Disable Apache Virtual Hosts

a2dissite ollama.conf
a2dissite ollama-le-ssl.conf
systemctl reload apache2

Step 6: Delete SSL Certificate (Optional)

certbot delete --cert-name ollama.yourdomain.com

Step 7: Remove Authentication File

rm /etc/apache2/.ollama_htpasswd

Step 8: Verify Ollama is Fully Removed

ss -tulnp | grep 11434

If nothing is returned, Ollama has been completely removed from your VPS.
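If you want to script that final check, a small helper can test the `ss -tulnp` output for the port. This assumes the usual ss column layout, where the local address appears as something like 127.0.0.1:11434 followed by whitespace.

```sh
#!/bin/sh
# Reads `ss -tulnp` output on stdin; succeeds if the given port is listed.

port_in_use() {
    grep -q ":$1 "
}
```

Usage: `ss -tulnp | port_in_use 11434 && echo "still listening" || echo "removed"`.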



Frequently Asked Questions (FAQs)

1. Can I run GPT-OSS-20B on an 8GB VPS?

No. GPT-OSS-20B requires at least 12–16GB RAM. On an 8GB VPS, the Linux OOM killer will terminate the process.

2. Which LLM is best for an 8GB VPS?

Phi is the safest and most stable model for shared production servers running Apache, Laravel, MySQL, or N8N.

3. Why does my curl request return no output?

This usually happens because the Content-Type: application/json header is missing in the request.

4. What does “signal: killed” error mean?

This error means the Linux Out Of Memory (OOM) killer terminated the Ollama process due to insufficient RAM.

5. Should I expose Ollama directly to the public internet?

No. Always bind Ollama to 127.0.0.1 and use Apache reverse proxy with SSL and authentication for security.

6. Can I run Ollama on my local machine?

Yes. You can install Ollama using the official installer or Docker and access it via http://localhost:11434.


Final Thoughts

Ollama is powerful and easy to deploy, but hardware limits matter. If configured properly with Docker isolation, reverse proxy, SSL, and authentication, you can safely run LLMs on your VPS without affecting Laravel or N8N applications.

Always monitor RAM usage and choose models wisely.

If you need help setting up Ollama, N8N automation, or secure VPS architecture, feel free to contact me.

Happy Building 🚀

How to Easily Install Elasticsearch on Mac: A 2026 Guide

Understanding the Prerequisites

Before you jump into the installation, let’s quickly cover the prerequisites. The good news is that they are minimal, making it very accessible for developers. To install Elasticsearch on Mac, you primarily need a supported version of macOS. While it’s intended for development and not production, it works flawlessly for local projects. In terms of hardware, ensure you have at least 2 GB of available RAM, though 4 GB is ideal if you plan to run Kibana or other parts of the Elastic Stack alongside it. One of the biggest conveniences is that you no longer need to install Java separately. The Elasticsearch archive comes with its own bundled OpenJDK, simplifying the setup process significantly.

How to Install Elasticsearch on Mac: The Core Steps

With the prerequisites out of the way, you can now proceed with the core installation. Following these steps carefully will ensure a smooth setup. This process involves downloading the official archive, verifying its integrity, extracting it, and handling a specific macOS security feature.

  1. Download the Elasticsearch Archive: Navigate to the official Elasticsearch downloads page. From there, select the `tar.gz` package for macOS (often labeled as Darwin). For instance, you would download a file named something like elasticsearch-8.18.4-darwin-x86_64.tar.gz (or elasticsearch-8.18.4-darwin-aarch64.tar.gz on Apple Silicon Macs).
  2. Verify the Checksum (Optional but Recommended): To ensure your downloaded file is not corrupted or tampered with, it’s a good security practice to verify its SHA512 checksum. You can do this by downloading the corresponding .sha512 file and running the shasum -a 512 -c <checksum-file> command in your terminal. You should see an “OK” confirmation.
  3. Extract the Archive: Once downloaded, open your terminal and use the command tar -xzf elasticsearch-8.18.4-darwin-x86_64.tar.gz to extract the contents. This will create a new directory (e.g., elasticsearch-8.18.4/), which is now referred to as your $ES_HOME.
  4. Bypass macOS Gatekeeper: This is a crucial step for a successful installation on macOS. Gatekeeper, the built-in security feature, may block the bundled JDK from running because it’s from an unidentified developer. To prevent this, run the command xattr -d -r com.apple.quarantine $ES_HOME/. This removes the quarantine attribute and allows Elasticsearch to execute without security interruptions.


Starting and Validating Your Elasticsearch Node

After you install Elasticsearch on Mac and configure it, the next step is to start the service and confirm it’s running correctly. By default, Elasticsearch runs in the foreground, making it easy to monitor its output. Navigate into your $ES_HOME directory and execute ./bin/elasticsearch. The first time you run this, Elasticsearch will perform a one-time security initialization. It will generate TLS certificates for network encryption, create a password for the `elastic` superuser, and produce an enrollment token for connecting Kibana. Be sure to copy the password and token from the terminal output. To validate the installation, open a new terminal window and run curl -u elastic:<your-generated-password> http://localhost:9200. If you receive a JSON response with cluster details, your node is running successfully. To stop the foreground process, simply press `Ctrl + C`. If you prefer to run it as a background daemon, use the command ./bin/elasticsearch -d -p pid, which will also create a file named `pid` containing the process ID for later management.

Optional: Integrating Kibana for a Visual Interface

While Elasticsearch is a powerful search engine, its capabilities are often best explored through its visualization tool, Kibana. Integrating Kibana into your local setup is highly recommended and straightforward thanks to the enrollment token generated during the initial Elasticsearch startup. The process is very similar to how you installed Elasticsearch. First, download and extract the Kibana `tar.gz` archive for macOS. Just like with Elasticsearch, you may need to run the `xattr` command to remove the quarantine flag if macOS blocks the application. Once extracted, simply run ./bin/kibana. The terminal will provide an enrollment link. Open this link in your browser, and it will prompt you to connect to your local Elasticsearch instance. Use the `elastic` username and the auto-generated password from the previous step to log in. This seamless process connects your Kibana dashboard to your Elasticsearch node, giving you a powerful UI to manage data, create visualizations, and interact with your search indices.

FAQs

Do I need to install Java to run Elasticsearch on Mac?

No, you do not need to install Java separately. The modern Elasticsearch tar.gz archive conveniently includes its own bundled OpenJDK, which is a key reason why it’s so straightforward to install Elasticsearch on Mac for development purposes.

How do I find the ‘elastic’ user password after the installation?

When you first install Elasticsearch on Mac and run it, the generated password for the `elastic` superuser is printed directly to the terminal console where you started the service. Make sure to copy and save this password securely.

What is the ‘xattr’ command for when installing Elasticsearch?

The `xattr -d -r com.apple.quarantine` command is a crucial step to tell macOS Gatekeeper to trust the application binaries. Without it, macOS security features may block the bundled JDK from running, preventing Elasticsearch from starting up correctly after you install Elasticsearch on Mac.

Can I use this method to install Elasticsearch on Mac for a production server?

No, this installation method is officially recommended for development and testing purposes only. A production environment requires a more robust setup, including considerations for high availability, security hardening, and performance tuning that go beyond this simple guide to install Elasticsearch on Mac.

How do I stop the Elasticsearch process?

To stop Elasticsearch when it is running in the foreground (the default startup method), simply press `Ctrl + C` in the same terminal window where the service is active. This will safely shut down the Elasticsearch node.

Conclusion/Recommendation

Congratulations! You have successfully managed to install Elasticsearch on Mac using the tarball archive. This single-node setup is perfect for local development, allowing you to test queries, build search functionality, and explore the powerful features of the Elastic Stack right from your macOS machine. With a running Elasticsearch node and an optional Kibana interface, you are now well-equipped to dive deeper into data indexing and analysis. For your next steps, consider creating your first index and adding some documents to see it in action.