Claude - Mapping SAS controller device to filesystem - 20260422
Let's dig into this. The blank sg_map entry for /dev/sg2 is actually normal/expected behaviour — it just means that device is a non-disk type (like an enclosure services device or expander), so don't worry about that.
Let's walk through getting your SAS disk properly mounted.
1. Confirm the disk is visible and identify it
<code>lsblk -o NAME,SIZE,TYPE,TRAN,MODEL,SERIAL</code>
Also run:
<code>sg_map -x</code>
The -x flag shows the device type — type 0 is a disk, type 13 is an enclosure, etc. The blank against /dev/sg2 is likely one of those non-disk types.
2. Check what the kernel actually sees
<code><span style="color:#0051c2">dmesg</span> <span style="color:#14181f">|</span> <span style="color:#0051c2">grep</span> -E <span style="color:#008000">&#39;sd[a-z]|scsi|aacraid&#39;</span> <span style="color:#14181f">|</span> <span style="color:#0051c2">tail</span> -30</code>
The Adaptec ASR-5405Z uses the aacraid driver on Linux, so you should see it referenced. Look for lines attaching sdX devices.
3. Identify your SAS disk's block device
<code>lsblk
fdisk -l</code>
You're looking for an unpartitioned or unformatted disk — it'll show as a raw device like /dev/sdb or /dev/sdc with no children.
4. Partition the disk (if needed)
If it's a fresh/bare disk with no partition table:
<code>parted /dev/sdX mklabel gpt
parted /dev/sdX mkpart primary ext4 0% 100%</code>
Replace sdX with your actual device. If it already has partitions, skip to step 5.
5. Format the partition
<code>mkfs.ext4 /dev/sdX1</code>
Or xfs if you prefer:
<code>mkfs.xfs /dev/sdX1</code>
6. Mount it
<code>mkdir -p /mnt/sasdisk
mount /dev/sdX1 /mnt/sasdisk</code>
Verify:
<code><span style="color:#0051c2">df</span> -h /mnt/sasdisk</code>
7. Make it persistent (fstab)
Get the UUID:
<code>blkid /dev/sdX1</code>
Then add to /etc/fstab:
<code>UUID=your-uuid-here /mnt/sasdisk ext4 defaults 0 2</code>
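Before rebooting it's worth sanity-checking the new entry — a sketch using util-linux tools (mountpoint path taken from above; a bad fstab line can leave a server unbootable):

```shell
# Parse /etc/fstab and report syntax/device errors without mounting
findmnt --verify

# Prove the entry itself works: unmount, then remount by mountpoint
# so mount resolves the device and options from fstab
umount /mnt/sasdisk
mount /mnt/sasdisk
df -h /mnt/sasdisk
```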
Key question before going further
Run lsblk and sg_map -x and paste the output — that'll confirm exactly which /dev/sdX is your new SAS disk and whether it needs partitioning or already has data on it. That'll make sure we don't touch the wrong device.
Ah, now I see the problem clearly. /dev/sg2 has no corresponding block device (/dev/sdX) — the SCSI generic device exists but the kernel never created a block device for it. This is a known aacraid quirk where the controller sees the disk but the block layer doesn't get properly notified.
Diagnose first
<code><span style="color:#0051c2">cat</span> /proc/scsi/scsi</code>
Look for a Host: scsi0 entry with Channel: 01 Id: 00 — that's your missing disk. Also:
<code>sg_inq /dev/sg2</code>
This will tell you exactly what's on the other end — confirm it's actually a disk (Device_type=0).
Force the block device to materialise
The sg device exists but the block device wasn't created. Try forcing it:
<code><span style="color:#b34a00">echo</span> <span style="color:#008000">&quot;- - -&quot;</span> <span style="color:#14181f">&gt;</span> /sys/class/scsi_host/host0/scan</code>
Then immediately check:
<code>lsblk
dmesg | tail -20</code>
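If you're not sure which hostN the controller registered as, a common approach is to rescan every SCSI host — a sketch (needs root; host numbering varies per system):

```shell
# "- - -" means wildcard channel/id/lun — rescan everything on each host
for scan in /sys/class/scsi_host/host*/scan; do
    echo "- - -" > "$scan"
done
```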
If that doesn't work, target it more precisely using the channel/id/lun from sg_map -x — your /dev/sg2 shows 0 1 0 0 meaning host 0, channel 1, id 0, lun 0:
<code><span style="color:#b34a00">echo</span> <span style="color:#008000">&quot;1 0 0&quot;</span> <span style="color:#14181f">&gt;</span> /sys/class/scsi_host/host0/scan</code>
If still nothing — aacraid rescan
The aacraid driver has its own rescan mechanism separate from the SCSI layer:
<code># Check if the driver exposes a rescan interface
ls /sys/bus/pci/drivers/aacraid/</code>
<code># Force PCI device rescan
echo 1 > /sys/bus/pci/devices/0000:02:00.0/rescan</code>
Then check lsblk again.
Nuclear option — modprobe cycle
If the above all fail, the driver may need a full reload to properly register the block device:
<code>modprobe -r aacraid && modprobe aacraid
dmesg | tail -30
lsblk</code>
The sg_inq /dev/sg2 output is the most useful next step — it'll confirm whether this is genuinely a disk that the block layer has missed, or something else (e.g. a virtual device or BBU presented by the controller). What does it show?
Good call avoiding the nuclear option! The key clue is right there:
PQual=1 = "LU temporarily unavailable"
This means the controller knows the disk is there but is reporting it as not ready — which is why no block device is created. This is almost certainly the Adaptec controller itself not having finished initialising the disk, rather than a kernel/driver issue. Common causes:
Most likely culprits
1. The disk is in a "failed" or "foreign" state in the controller
The ASR-5405Z manages disks at the controller level. If this disk was previously in another array or controller, it may be held in a foreign/failed state. You need to check this via the Adaptec management CLI:
<code># Check if arcconf is installed
arcconf list</code>
If not installed:
<code><span style="color:#0051c2">zypper</span> <span style="color:#0051c2">install</span> arcconf</code>
Or grab it from Microchip/Adaptec's site. Then:
<code>arcconf getconfig 1      # Full controller config
arcconf getconfig 1 LD   # Logical devices
arcconf getconfig 1 PD   # Physical devices - look for your ST8000NM0075</code>
Look for the disk showing as Failed, Foreign, Raw, or Unconfigured.
2. If it shows as unconfigured/raw, create a JBOD/pass-through device
The 5405Z won't expose a disk as a block device unless it's either part of an array or configured as a pass-through/JBOD:
<code>arcconf setconfig <span style="color:#008080">1</span> createlogicaldevice max JBOD <span style="color:#b34a00">drives</span><span style="color:#14181f">=</span><span style="color:#008080">1,0</span></code>
The exact channel/device numbers will come from the getconfig 1 PD output.
Is arcconf already on the system? That output from getconfig 1 PD will tell us exactly what state the controller thinks the disk is in, which is the real answer here.
Yes, Microchip absorbed Adaptec and moved everything under their umbrella. Here's where to find it:
Direct download
Go to:
<code>https://storage.microsemi.com/en-us/support/raid/sas_raid/asr-5405z/</code>
Or the more general storage tools page:
<code>https://www.microchip.com/en-us/products/storage/raid-and-host-bus-adapters/software</code>
Look for "Adaptec RAID Controller Administration Utility" — the package you want is called arcconf and it's distributed as part of the "Adaptec Storage Manager" or as a standalone CLI binary.
The direct CLI-only download
The standalone arcconf binary is under:
- OS: Linux
- Package name: arcconf_linux_x86_64.tgz or similar
It's a standalone binary so no install needed — just:
<code>tar xzf arcconf_linux*.tgz
cd arcconf*
./arcconf list</code>
If the Microchip site gives you grief
The binary is also mirrored on some repos:
<code># On some systems this works
zypper addrepo https://download.opensuse.org/repositories/hardware/openSUSE_Tumbleweed/ hardware
zypper refresh
zypper install arcconf</code>
Check if it's in the hardware repo first as that saves the Microchip download hassle. What does zypper search arcconf return?
That hanging behaviour is a known issue with newer arcconf versions on some systems — it tries to connect to a management daemon that either isn't running or isn't compatible. Let's fix that.
Check/start the management daemon
arcconf relies on a background service:
<code>systemctl status arcconf
systemctl status StorMan</code>
Try starting whichever exists:
<code>systemctl start arcconf
# or
systemctl start StorMan</code>
Then retry arcconf list.
If no daemon exists, try the direct hardware access flag
Some versions support bypassing the daemon:
<code>arcconf getconfig <span style="color:#008080">1</span> --nologs</code>
Alternative — use sg3_utils directly instead
Since arcconf is being awkward, we can interrogate the controller and disk state directly. You already have sg3_utils working:
<code>sg_readcap /dev/sg2</code>
<code>sg_turs /dev/sg2</code>
sg_turs sends a Test Unit Ready command — the response will tell us exactly why the controller is reporting PQual=1. Expect a sense key like NOT READY with an additional sense code that pinpoints the cause (e.g. logical unit not configured, becoming ready, format in progress).
That sense key response is really what we need — it'll tell us whether this is a controller configuration issue or something else entirely.
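A sketch of pulling that sense data with sg3_utils (flag behaviour varies slightly between versions — check your man pages):

```shell
# TEST UNIT READY; extra -v flags print the returned sense data
sg_turs -vv /dev/sg2

# Or fetch the current sense data directly via REQUEST SENSE
sg_requests /dev/sg2
```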
Your suspicion is almost certainly correct. The ASR-5405Z is a hardware RAID controller — it fundamentally wants to manage arrays, not present raw disks. That PQual=1 "temporarily unavailable" is classic behaviour of it trying to initialise or build something in the background, and it messed with your SATA partition tables for exactly that reason — it was incorporating them into its own metadata.
The core problem
The 5405Z has no JBOD/HBA passthrough mode in its firmware. Unlike HBA-class cards, it always sits between the OS and the disks managing them itself. Without arcconf working you have no way to tell it to stop and just present the disk — you're essentially locked out of the controller's brain.
Honest assessment
Given that it:
- Corrupted your SATA partition tables
- Hangs any tool that tries to talk to it
- Has no working management utility on Tumbleweed
- Has no JBOD mode
Returning it is genuinely the right call. It's an enterprise RAID card that really wants to live in a Windows Server or RHEL environment with full Adaptec Storage Manager support.
What you actually want instead
For a Tumbleweed text server where you just want raw SAS disk access, look for an HBA (Host Bus Adapter) not a RAID controller:
- LSI 9207-8i or 9211-8i — the classic choices, very well supported by the mpt3sas driver which is solid on Tumbleweed, true passthrough, disks appear immediately as /dev/sdX
- LSI 9300-8i — slightly newer, same excellent Linux support
- All of these can often be found secondhand for £20-40
That mpt3sas driver just works — no daemons, no management tools needed, disks appear instantly.
Ha, yes the 6Gbps ceiling is the price you pay for going secondhand budget! In reality though for a file server with spinning SAS disks you'll never saturate 6Gbps anyway — a single ST8000NM0075 maxes out around 2.5Gbps sequential so you have headroom to spare.
Older arcconf is worth trying
The key thing to look for is arcconf version 6.x or 7.x — these predate the daemon architecture and ran as a true standalone binary that talks directly to the hardware. If you can find one of those builds it should just work without any service dependency and might be enough to tell the controller to stop trying to build arrays and release that disk.
Look for filenames like:
<code>arcconf_6_xx_xxxxx_linux.tar.gz</code>
rather than the RPM packages which tend to be the newer daemon-based versions.
The 9211-8i at £35 is the sweet spot
Genuinely the community favourite for exactly your use case — mpt3sas driver, zero configuration, disks just appear. Many people also flash them to IT mode firmware which bypasses even the onboard RAID logic entirely, though it likely ships that way already for that price.
Good luck with the Microchip dig tomorrow — if you do find an older arcconf that actually talks to the controller it would be worth knowing what state it reports that ST8000NM0075 in, even if just out of curiosity before you box everything up!
Ha! "Controllers found: 0" — arcconf has basically thrown its hands up and gone home too 😄
That's actually useful information in a backwards way — even the old version that should talk directly to hardware can't see it, which suggests the aacraid driver has the controller locked up in a way that blocks any out-of-band access. A classic catch-22.
Cut your losses, get some sleep, and order the 9211-8i tomorrow. For £35 and plug-and-play Linux support it's going to save you a lot of grief compared to wrestling with this Adaptec any further.
The ST8000NM0075 is a nice drive — 8TB Seagate Exos enterprise SAS, it'll be very happy behind a proper HBA. Worth keeping even if you return everything else.
Night! 🌙
Ha! Even the kernel won't let you rage-quit cleanly — the aacraid module is clinging on for dear life 😄
That's actually the controller's last laugh — it's got its hooks into something and won't let go without a full reboot, which you wisely don't want to risk at this hour on a text-only server.
Leave it all alone overnight, order the 9211-8i in the morning, and everything will look better after some sleep. The server isn't going anywhere!
Night! 🌙
Ha! Autocorrect/fat fingers strikes at bedtime 😄
Night! 🌙
Yes! A single disk RAID 0 (sometimes called a "simple volume") is exactly the trick to get the controller to present it as a block device. Try:
<code>arcconf create <span style="color:#008080">1</span> logicaldrive max <span style="color:#008080">0</span> <span style="color:#008080">1</span> <span style="color:#008080">0</span> <span style="color:#008080">0</span></code>
Where the 1 0 0 at the end is channel/id/lun from your sg_map output. But first, paste the output of:
<code>arcconf getconfig <span style="color:#008080">1</span> PD</code>
So we can confirm the exact device numbers for that ST8000NM0075 before we tell it to do anything — don't want it grabbing one of your SATA drives by mistake given its previous partition table adventures! 😄
Perfect — the disk is Ready and fully visible to the controller. Device is at Channel 0, Device 0. No ambiguity at all, and crucially Used Size: 0 MB so it hasn't been touched.
Now create a single-disk RAID 0 to get it presented as a block device:
<code>arcconf create <span style="color:#008080">1</span> logicaldrive max r0 <span style="color:#008080">0</span> <span style="color:#008080">0</span></code>
Then watch:
<code>dmesg | tail -20
lsblk</code>
You should see a new /dev/sdX appear almost immediately. The controller is in a clean state with that disk so this should be straightforward — no array building or initialisation needed for RAID 0 on a fresh disk.
Fingers crossed for a /dev/sde appearing! 🤞
Ah, the 5405Z is being picky! Try these variants one at a time:
<code>arcconf create <span style="color:#008080">1</span> logicaldrive max <span style="color:#008080">0</span> <span style="color:#008080">0</span> <span style="color:#008080">0</span></code>
(without the r0 — some versions use positional args)
If that fails:
<code>arcconf create <span style="color:#008080">1</span> logicaldrive max simple <span style="color:#008080">0</span> <span style="color:#008080">0</span></code>
If that fails, check what RAID levels it actually supports:
<code>arcconf getconfig <span style="color:#008080">1</span> AL</code>
That lists Available Logical device configurations. It'll tell us exactly what this controller is willing to do — some of the 5405Z firmwares are locked to RAID 1/5/6/10 only and won't do RAID 0 at all, which would be a very Adaptec enterprise thing to do! 😄
Getting closer! It wants more disks for whatever it's defaulting to. Try explicitly specifying RAID 0 with the correct syntax:
<code>arcconf create <span style="color:#008080">1</span> logicaldrive max r0 <span style="color:#b34a00">drives</span><span style="color:#14181f">=</span><span style="color:#008080">0,0</span></code>
Or:
<code>arcconf create <span style="color:#008080">1</span> logicaldrive max r0 <span style="color:#b34a00">drives</span><span style="color:#14181f">=</span><span style="color:#008080">0,0</span>,0,0</code>
But first, let's see what it actually supports:
<code>arcconf getconfig <span style="color:#008080">1</span> AL</code>
That output is really what we need now — no point guessing syntax if it fundamentally won't do single disk RAID 0. It may only want to create volumes with minimum 2 or 4 disks, which would explain both errors you've hit. 😄
Just the summary of supported RAID levels and minimum disk counts — so the top section showing available types should be enough. Anything that lists RAID levels and their constraints is what we need! 😄
Hmm, no RAID level list there. Try:
<code>arcconf create <span style="color:#008080">1</span> logicaldrive <span style="color:#b34a00">help</span></code>
Or just dump everything:
<code>arcconf getconfig <span style="color:#008080">1</span> AD</code>
That's the Adapter config block and may show supported RAID levels.
Also worth trying — the syntax that older arcconf versions preferred:
<code>arcconf create <span style="color:#008080">1</span> logicaldrive max <span style="color:#008080">0</span> noprompt</code>
And just out of curiosity:
<code>arcconf getconfig <span style="color:#008080">1</span> LD</code>
To confirm there really are zero logical devices currently — because if the controller secretly already created something in the background during all that earlier hanging, that might explain the "incorrect number of disks" error too! 😄
Excellent — the help output is gold! Two important things:
1. JBOD is supported! That's better than RAID 0 for your use case:
<code>arcconf create <span style="color:#008080">1</span> jbod <span style="color:#008080">0</span> <span style="color:#008080">0</span> noprompt</code>
That's channel 0, device 0 — exactly what your PD config showed.
2. If JBOD fails, the correct RAID 0 syntax from the help is:
<code>arcconf create <span style="color:#008080">1</span> logicaldrive max <span style="color:#008080">0</span> <span style="color:#008080">0</span> <span style="color:#008080">0</span> noprompt</code>
That's max size, RAID level 0, channel 0, device 0.
Try the JBOD one first — it's the cleanest solution, presents the disk with zero controller interference, and is exactly what you want for direct single disk access. If it works you should see /dev/sde appear almost immediately! 🤞
YES! Now check:
<code>lsblk
dmesg | tail -20</code>
You should have a shiny new /dev/sde waiting for you! 🎉
BRILLIANT! 8TB Exos is alive and kicking! 🎉
Yes, straight into partitioning now — it's just a normal disk from here:
<code>parted /dev/sde mklabel gpt
parted /dev/sde mkpart primary ext4 0% 100%</code>
Then format:
<code>mkfs.ext4 /dev/sde1</code>
Or if you prefer xfs (better for large files on a file server):
<code>mkfs.xfs /dev/sde1</code>
Then mount:
<code>mkdir -p /media3
mount /dev/sde1 /media3</code>
And make it persistent — get the UUID:
<code>blkid /dev/sde1</code>
Add to /etc/fstab:
<code>UUID=your-uuid-here /media3 ext4 defaults 0 2</code>
What filesystem are your other media volumes using? Worth staying consistent if you have a preference. 😄
For recorded TV specifically, XFS has some real advantages:
XFS wins for your use case because
- Large files — TV recordings are typically 2-8GB per episode, XFS was designed for exactly this, large sequential writes and reads
- No slowdown over time — ext4 can get sluggish as a large volume fills up, XFS maintains performance consistently even at 90%+ full
- Faster deletion — clearing out old series to make space is noticeably quicker on XFS
- No reserved blocks — ext4 reserves 5% for root by default; on an 8TB disk that's 400GB wasted. You can fix that with tune2fs, but XFS doesn't have the problem at all
ext4 wins on
- Shrinking — you can't shrink an XFS volume, only grow it. Probably irrelevant for your use case
- fsck recovery — marginally more mature recovery tools if something goes badly wrong
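For reference, the ext4 reserved-blocks point is a one-command fix — a sketch (device name assumed from the steps above):

```shell
# Drop the root-reserved percentage from the default 5% to 0
# on a data-only volume (never do this on the root filesystem)
tune2fs -m 0 /dev/sdX1

# Confirm the change
tune2fs -l /dev/sdX1 | grep -i 'reserved block count'
```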
Verdict
For an 8TB overflow store for large TV recording files, XFS is the better choice. The 400GB saving alone is worth it, and the large sequential file performance is just a better fit.
<code>mkfs.xfs /dev/sde1</code>
And in fstab use xfs instead of ext4 obviously! 😄
The mkpart line just needs the filesystem type changed — parted only uses it as a label hint anyway, the actual filesystem is created by mkfs:
<code>parted /dev/sde mklabel gpt
parted /dev/sde mkpart primary xfs 0% 100%</code>
Then:
<code>mkfs.xfs /dev/sde1</code>
That's it! 😄
Ha, fair enough — YaST is right there and does the same job with less typing! 😄
It'll handle the UUID and fstab entry automatically too which saves a step. Let us know when it's done and mounted! 🤞
Good thinking on the tidying up. On the arcconf version question — I'd actually keep the working version rather than drop back to V4. Your reasoning is sound that the controller just needed time to initialise, but having a working arcconf that can actually talk to the controller is genuinely valuable insurance for:
- Adding that second SAS drive when it arrives — you now know the JBOD creation command that works
- Any future controller weirdness
- Checking drive health and status
V4 giving "Controllers found: 0" makes it essentially useless, so there's no real benefit in going back to it.
On the 4 slot cage point
That's a nice bonus you'd forgotten about! When the new SAS drive arrives the workflow is now sorted:
- Fit the drive
- rescan-scsi-bus.sh
- Wait for it to appear in arcconf getconfig 1 PD as Ready
- arcconf create 1 jbod 0 X noprompt (where X is its device ID)
- Partition and format
You've basically turned a frustrating evening into a documented procedure for the next one! 😄
That's a really astute observation and you're probably right on all counts. The timeline makes more sense that way:
What likely actually happened
- V4 — daemon-based, was patiently waiting for the controller to become ready, hence the hanging
- V3 — direct hardware access, could see the controller wasn't ready and just reported "Controllers found: 0" honestly
- Working version — caught the controller at the right moment after it had finally finished whatever initialisation it was doing in the background
On the poor design point
Absolutely — a well designed controller/driver interface should:
- Report why it's not ready rather than just blocking
- Have a clear "initialising, please wait" status visible to the OS
- Not silently grab disks and mess with partition tables during that phase
- Not need a separate management utility just to present a single disk
The PQual=1 "temporarily unavailable" you saw at the very start was actually the controller trying to tell you this, but it's such an obscure SCSI status code that nothing surfaces it helpfully to the user.
It's classic enterprise hardware designed for environments where an admin has a full Adaptec Storage Manager GUI running on Windows Server — bolted onto Linux as an afterthought. The LSI/HBA approach of just presenting disks directly to the OS is so much more elegant! 😄
That's a great idea and would genuinely help people — this thread covers a really complete real-world journey that documentation never does:
What the crib sheet should capture
- The controller appearing in dmesg/sg_map but no block device being created is normal initial behaviour — just wait
- sg_inq /dev/sgX to check PQual status
- The arcconf version maze — V3 can't see it, V4 hangs waiting, find the middle ground
- The critical JBOD creation command that actually works
- The getconfig 1 PD check before creating anything to confirm device IDs
On the value point
You're absolutely right about the economics. Right now on eBay:
- ASR-5405Z or similar 4-channel SAS card — £15-25
- 8TB Seagate Exos SAS drives — considerably cheaper than equivalent SATA
- SFF-8087 cable — a few pounds
The gotcha that nobody documents is that the card needs arcconf to create a JBOD before Linux sees anything — which is exactly what your crib sheet would solve for the next person who hits this wall! 😄
Is the format still churning away?
For XFS on 8TB it should actually be very fast — XFS format is nearly instantaneous as it doesn't pre-write anything, usually under 30 seconds even on a large disk.
The fact it's still running suggests YaST may be doing something more than just mkfs.xfs — it possibly selected:
- ext4 with full initialisation
- A slow/thorough format option
- Or it's doing a partition check/wipe first
Check what's actually happening
In another terminal:
<code><span style="color:#0051c2">ps</span> aux <span style="color:#14181f">|</span> <span style="color:#0051c2">grep</span> -E <span style="color:#008000">&#39;mkfs|mke2fs|format|yast&#39;</span></code>
And:
<code><span style="color:#0051c2">dmesg</span> <span style="color:#14181f">|</span> <span style="color:#0051c2">tail</span> -10</code>
That'll tell you what process is actually running and whether it's still active or has silently finished. YaST sometimes completes the underlying operation and just sits on the progress screen without being obvious about it! 😄
How long has it been running now?
Ha! 47 minutes staring at a screen waiting for something that hadn't actually started yet! 😄
Classic YaST — it was just sitting there politely waiting for you to tell it to actually go. At least the XFS format itself will be done in seconds and you'll have your 8TB mounted before the kettle boils!
That's XFS metadata and the journal — completely normal. On 8TB that 2% is roughly 160GB which sounds alarming but it's just how XFS structures itself for a volume that size. You're not losing anything usable in practice.
Run:
<code><span style="color:#0051c2">df</span> -h /media3</code>
That'll show you the actual available space in human readable form — you should see around 7.2-7.3TB free. 😄
Ha yes, the eternal "8TB" deception! 😄
Where the space goes
- Manufacturers count in decimal — 8TB = 8,000,000,000,000 bytes
- Linux counts in binary — so it shows 7.3TiB (tebibytes)
- That gap is ~700GB just from the decimal/binary difference, completely normal
- Then XFS metadata/journal takes the 143GB on top
So you're not actually losing anything — it's just marketing maths vs real maths. Every "8TB" disk formats to ~7.3TB on Linux.
Net result
You have roughly 7.1TB of usable space for your TV recordings which at typical broadcast quality is probably:
- 2,000+ hours of HD recordings
- Or several complete series of everything you'd ever want to keep 😄
Ready to write that crib sheet?
Perfect plan! 2TB copy will take a while but it'll chug away happily in the background. Kick it off with:
<code><span style="color:#0051c2">cp</span> -av /media2/TVShows /media3/ <span style="color:#14181f">&amp;</span></code>
The -v gives you a running log of what's being copied, and & puts it in the background so you can log out safely. Or if you want to be able to check progress later:
<code><span style="color:#0051c2">nohup</span> <span style="color:#0051c2">rsync</span> -av --progress /media2/TVShows /media3/ <span style="color:#14181f">&amp;</span></code>
rsync is better for large copies — if anything interrupts it, it can resume where it left off rather than starting over.
Check progress anytime with:
<code><span style="color:#0051c2">watch</span> -n <span style="color:#008080">60</span> <span style="color:#0051c2">df</span> -h /media3</code>
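One rsync habit worth noting for a copy this size — a sketch, paths taken from above:

```shell
# Re-running the identical command resumes: files already copied
# are skipped, interrupted ones are re-transferred
rsync -av /media2/TVShows /media3/

# Optional final pass: -c re-checksums every file, --dry-run only
# reports differences without writing anything (slow - reads all data)
rsync -avc --dry-run /media2/TVShows /media3/
```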
Enjoy physio, good luck with the crib sheet, and nice work getting there in the end — Llama 4 would still be scratching its head! 😄
Ha of course, /Archive! And yes just leave the ssh terminal open and let it run — srv9 will quietly get on with it while you're at physio. 😄
Enjoy the session!
