Claude - Mapping SAS controller device to filesystem - 20260422

Created by: Lester Caine, Last modification: 22 hours 16 minutes ago

Will need to tidy this in time, but at least it's saved!

Llama 4 has been making a pig's ear of helping with this problem, so I've ditched it and switched to here ;) Text-based server running Tumbleweed. I am trying to see if the Adaptec ASR-5405Z 512MB SAS controller I have just fitted is being seen. I have established that it is, and rescan-scsi-bus.sh can see all the /dev/sdX devices, but sg_map has a blank entry against /dev/sg2, so I need help to hook it up so I can mount the SAS disk to the file system.

19:51
 
 
 
 

Let's dig into this. The blank sg_map entry for /dev/sg2 is actually normal/expected behaviour — it just means that device is a non-disk type (like an enclosure services device or expander), so don't worry about that.

Let's walk through getting your SAS disk properly mounted.

1. Confirm the disk is visible and identify it

 
bash
<code>lsblk -o NAME,SIZE,TYPE,TRAN,MODEL,SERIAL</code>

Also run:

 
bash
<code>sg_map -x</code>

The -x flag shows the device type — type 0 is a disk, type 13 is an enclosure, etc. The blank against /dev/sg2 is likely one of those non-disk types.
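If you'd rather not memorise the SCSI type codes, a small awk helper can annotate that column (a sketch — decode_sg_types is a made-up name; the sample lines below are taken from the real sg_map -x output pasted later in this thread):

```shell
# decode_sg_types: annotate `sg_map -x` lines with a human-readable device type.
# Field 6 of each line is the SCSI peripheral device type; field 7, if present,
# is the mapped block device.
decode_sg_types() {
  awk '{
    t = $6
    if (t == 0)       name = "disk"
    else if (t == 5)  name = "cd/dvd"
    else if (t == 13) name = "enclosure"
    else              name = "other(" t ")"
    print $1, name, (NF >= 7 ? $7 : "NO-BLOCK-DEVICE")
  }'
}

# Sample input (in real use: sg_map -x | decode_sg_types):
decode_sg_types <<'EOF'
/dev/sg0 2 0 0 0 0 /dev/sda
/dev/sg2 0 1 0 0 0
EOF
```

An sg entry with no seventh field is exactly the "blank entry" case: the generic device exists but no block device was mapped.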


2. Check what the kernel actually sees

 
bash
<code><span style="color:#0051c2">dmesg</span> <span style="color:#14181f">|</span> <span style="color:#0051c2">grep</span> -E <span style="color:#008000">'sd[a-z]|scsi|aacraid'</span> <span style="color:#14181f">|</span> <span style="color:#0051c2">tail</span> -30</code>

The Adaptec ASR-5405Z uses the aacraid driver on Linux, so you should see it referenced. Look for lines attaching sdX devices.


3. Identify your SAS disk's block device

 
bash
<code>lsblk
<span style="color:#0051c2">fdisk</span> -l</code>

You're looking for an unpartitioned or unformatted disk — it'll show as a raw device like /dev/sdb or /dev/sdc with no children.


4. Partition the disk (if needed)

If it's a fresh/bare disk with no partition table:

 
bash
<code><span style="color:#0051c2">parted</span> /dev/sdX mklabel gpt
<span style="color:#0051c2">parted</span> /dev/sdX mkpart primary ext4 <span style="color:#008080">0</span>% <span style="color:#008080">100</span>%</code>

Replace sdX with your actual device. If it already has partitions, skip to step 5.


5. Format the partition

 
bash
<code>mkfs.ext4 /dev/sdX1</code>

Or xfs if you prefer:

 
bash
<code>mkfs.xfs /dev/sdX1</code>

6. Mount it

 
bash
<code><span style="color:#0051c2">mkdir</span> -p /mnt/sasdisk
<span style="color:#0051c2">mount</span> /dev/sdX1 /mnt/sasdisk</code>

Verify:

 
bash
<code><span style="color:#0051c2">df</span> -h /mnt/sasdisk</code>

7. Make it persistent (fstab)

Get the UUID:

 
bash
<code>blkid /dev/sdX1</code>

Then add to /etc/fstab:

 
<code>UUID=your-uuid-here  /mnt/sasdisk  ext4  defaults  0  2</code>
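One optional variant worth considering behind a RAID controller that can be slow to initialise at boot: add nofail so a not-yet-ready disk doesn't hang the boot (an assumption about your boot requirements — drop it if the mount must always succeed):

```
UUID=your-uuid-here  /mnt/sasdisk  ext4  defaults,nofail  0  2
```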

Key question before going further

Run lsblk and sg_map -x and paste the output — that'll confirm exactly which /dev/sdX is your new SAS disk and whether it needs partitioning or already has data on it. That'll make sure we don't touch the wrong device.

 
 
 
 

lsblk only lists the disks that have entries in sg_map, and -x gives:

<code>srv9:/media2 # sg_map -x
/dev/sg0 2 0 0 0 0 /dev/sda
/dev/sg1 2 0 1 0 5 /dev/sr0
/dev/sg2 0 1 0 0 0
/dev/sg3 3 0 0 0 0 /dev/sdb
/dev/sg4 5 0 0 0 0 /dev/sdc
/dev/sg5 6 0 0 0 0 /dev/sdd</code>

Step 2 shows the 4 other disks and the DVD drive mounting, but:

<code>srv9:/media2 # dmesg | grep -E 'aacraid' | tail -30
[    2.018564] [  T418] Adaptec aacraid driver 1.2.1[50983]-custom
[    2.046734] [    T9] aacraid 0000:02:00.0: can't disable ASPM; OS doesn't have ASPM control
[    2.048446] [    T9] aacraid: Comm Interface enabled
[    2.272724] [    T9] aacraid 0000:02:00.0: 64 Bit DAC enabled
[    2.307024] [    T9] scsi host0: aacraid</code>

Step 3 fails because /dev/sg2 is not seen as a block device — it's THAT which I need to fix, as none of the rest will do anything until there is a /dev/sde entry in sg_map.

 
20:03
 
 
 
 

Ah, now I see the problem clearly. /dev/sg2 has no corresponding block device (/dev/sdX) — the SCSI generic device exists but the kernel never created a block device for it. This is a known aacraid quirk where the controller sees the disk but the block layer doesn't get properly notified.

Diagnose first

 
bash
<code><span style="color:#0051c2">cat</span> /proc/scsi/scsi</code>

Look for a Host: scsi0 entry with Channel: 01 Id: 00 — that's your missing disk. Also:

 
bash
<code>sg_inq /dev/sg2</code>

This will tell you exactly what's on the other end — confirm it's actually a disk (Device_type=0).


Force the block device to materialise

The sg device exists but the block device wasn't created. Try forcing it:

 
bash
<code><span style="color:#b34a00">echo</span> <span style="color:#008000">"- - -"</span> <span style="color:#14181f">></span> /sys/class/scsi_host/host0/scan</code>

Then immediately check:

 
bash
<code>lsblk
<span style="color:#0051c2">dmesg</span> <span style="color:#14181f">|</span> <span style="color:#0051c2">tail</span> -20</code>

If that doesn't work, target it more precisely using the channel/id/lun from sg_map -x — your /dev/sg2 shows 0 1 0 0 meaning host 0, channel 1, id 0, lun 0:

 
bash
<code><span style="color:#b34a00">echo</span> <span style="color:#008000">"1 0 0"</span> <span style="color:#14181f">></span> /sys/class/scsi_host/host0/scan</code>

If still nothing — aacraid rescan

The aacraid driver has its own rescan mechanism separate from the SCSI layer:

 
bash
<code><span style="color:#6e7687"># Check if the driver exposes a rescan interface</span>
<span style="color:#0051c2">ls</span> /sys/bus/pci/drivers/aacraid/</code>
 
bash
<code><span style="color:#6e7687"># Force PCI device rescan</span>
<span style="color:#b34a00">echo</span> <span style="color:#008080">1</span> <span style="color:#14181f">></span> /sys/bus/pci/devices/0000:02:00.0/rescan</code>

Then check lsblk again.


Nuclear option — modprobe cycle

If the above all fail, the driver may need a full reload to properly register the block device:

 
bash
<code>modprobe -r aacraid <span style="color:#14181f">&&</span> modprobe aacraid
<span style="color:#0051c2">dmesg</span> <span style="color:#14181f">|</span> <span style="color:#0051c2">tail</span> -30
lsblk</code>

The sg_inq /dev/sg2 output is the most useful next step — it'll confirm whether this is genuinely a disk that the block layer has missed, or something else (e.g. a virtual device or BBU presented by the controller). What does it show?

 
 
 
 

<code>Host: scsi0 Channel: 01 Id: 00 Lun: 00
  Vendor: LENOVO-X Model: ST8000NM0075 Rev: LC77
  Type:   Direct-Access                    ANSI SCSI revision: 06</code>

and:

<code>srv9:/media2 # sg_inq /dev/sg2
standard INQUIRY: [PQ indicates LU temporarily unavailable]
  PQual=1  PDT=0  RMB=0  LU_CONG=0  hot_pluggable=0  version=0x06  [SPC-4]
  [AERC=0]  [TrmTsk=0]  NormACA=0  HiSUP=1  Resp_data_format=2
  SCCS=0  ACC=0  TPGS=0  3PC=0  Protect=1  [BQue=0]
  EncServ=0  MultiP=1 (VS=0)  [MChngr=0]  [ACKREQQ=0]  Addr16=0
  [RelAdr=0]  WBus16=0  Sync=0  [Linked=0]  [TranDis=0]  CmdQue=1
  [SPI: Clocking=0x0  QAS=0  IUS=0]
    length=96 (0x60)   Peripheral device type: disk
 Vendor identification: LENOVO-X
 Product identification: ST8000NM0075
 Product revision level: LC77
 Unit serial number: ZA19KPP00000R817QGA2</code>

and I am now at the nuclear option, which needed a power reset last time to regain control ;)

 
20:10
 
 
 
 

Good call avoiding the nuclear option! The key clue is right there:

PQual=1 = "LU temporarily unavailable"

This means the controller knows the disk is there but is reporting it as not ready — which is why no block device is created. This is almost certainly the Adaptec controller itself not having finished initialising the disk, rather than a kernel/driver issue. Common causes:

Most likely culprits

1. The disk is in a "failed" or "foreign" state in the controller

The ASR-5405Z manages disks at the controller level. If this disk was previously in another array or controller, it may be held in a foreign/failed state. You need to check this via the Adaptec management CLI:

 
bash
<code><span style="color:#6e7687"># Check if arcconf is installed</span>
arcconf list</code>

If not installed:

 
bash
<code><span style="color:#0051c2">zypper</span> <span style="color:#0051c2">install</span> arcconf</code>

Or grab it from Microchip/Adaptec's site. Then:

 
bash
<code>arcconf getconfig <span style="color:#008080">1</span>          <span style="color:#6e7687"># Full controller config</span>
arcconf getconfig <span style="color:#008080">1</span> LD       <span style="color:#6e7687"># Logical devices</span>
arcconf getconfig <span style="color:#008080">1</span> PD       <span style="color:#6e7687"># Physical devices - look for your ST8000NM0075</span></code>

Look for the disk showing as Failed, Foreign, Raw, or Unconfigured.


2. If it shows as unconfigured/raw, create a JBOD/pass-through device

The 5405Z won't expose a disk as a block device unless it's either part of an array or configured as a pass-through/JBOD:

 
bash
<code>arcconf create 1 jbod Channel# Device# noprompt</code>

The exact Channel# and Device# come from the getconfig 1 PD output. (The create syntax varies between arcconf versions — run arcconf create with no arguments to see what yours accepts.)


Is arcconf already on the system? That output from getconfig 1 PD will tell us exactly what state the controller thinks the disk is in, which is the real answer here.

 
 
 
 

suse does not have access to arcconf and I'd seen Microchip in searches but did not go down that path. Any pointers on where to find it?

20:12
 
 
 
 

Yes, Microchip absorbed Adaptec and moved everything under their umbrella. Here's where to find it:

Direct download

Go to:

 
<code>https://storage.microsemi.com/en-us/support/raid/sas_raid/asr-5405z/</code>

Or the more general storage tools page:

 
<code>https://www.microchip.com/en-us/products/storage/raid-and-host-bus-adapters/software</code>

Look for "Adaptec RAID Controller Administration Utility" — the package you want is called arcconf and it's distributed as part of the "Adaptec Storage Manager" or as a standalone CLI binary.

The direct CLI-only download

The standalone arcconf binary is under:

  • OS: Linux
  • Package name: arcconf_linux_x86_64.tgz or similar

It's a standalone binary so no install needed — just:

 
bash
<code><span style="color:#0051c2">tar</span> xzf arcconf_linux*.tgz
<span style="color:#b34a00">cd</span> arcconf*
./arcconf list</code>

If the Microchip site gives you grief

The binary is also mirrored on some repos:

 
bash
<code><span style="color:#6e7687"># On some systems this works</span>
<span style="color:#0051c2">zypper</span> addrepo https://download.opensuse.org/repositories/hardware/openSUSE_Tumbleweed/ hardware
<span style="color:#0051c2">zypper</span> refresh
<span style="color:#0051c2">zypper</span> <span style="color:#0051c2">install</span> arcconf</code>

Check if it's in the hardware repo first as that saves the Microchip download hassle. What does zypper search arcconf return?

 
 
 
 

arcconf is not in the hardware repository, only adaptec-firmware and a couple of plugins for other management tools. None of your pointers to Microchip are still active, but I have found Arcconf-4.17-26540.x86_64.rpm; however, while it installed without an error, it just hangs, even with arcconf --help :(

20:33
 
 
 
 

That hanging behaviour is a known issue with newer arcconf versions on some systems — it tries to connect to a management daemon that either isn't running or isn't compatible. Let's fix that.

Check/start the management daemon

arcconf relies on a background service:

 
bash
<code>systemctl status arcconf
systemctl status StorMan</code>

Try starting whichever exists:

 
bash
<code>systemctl start arcconf
<span style="color:#6e7687"># or</span>
systemctl start StorMan</code>

Then retry arcconf list.


If no daemon exists, try the direct hardware access flag

Some versions support bypassing the daemon:

 
bash
<code>arcconf getconfig <span style="color:#008080">1</span> --nologs</code>

Alternative — use sg3_utils directly instead

Since arcconf is being awkward, we can interrogate the controller and disk state directly. You already have sg3_utils working:

 
bash
<code>sg_readcap /dev/sg2</code>
 
bash
<code>sg_turs /dev/sg2</code>

sg_turs sends a Test Unit Ready command — the response will tell us exactly why the controller is reporting PQual=1. It'll return one of:

  • A sense key like NOT READY with an additional code that pinpoints the cause (e.g. logical unit not configured, becoming ready, format in progress)

That sense key response is really what we need — it'll tell us whether this is a controller configuration issue or something else entirely.

 
 
 
 

neither daemon exists and the other three commands all just hang, obviously waiting on the controller being ready? Getting to the point, I think, where I just pack up the SAS drive bought by mistake and return it along with the other two orders for the controller and the cable. The annoying thing is that the controller IS seeing the drive and also the two SATA drives I moved over. I just need something that can actually TALK to it to find out just what its problem is. It did screw the partition tables on the SATA drives, but fortunately I was able to recover both, and now I suspect that the card is trying to build an array when I just need access to the raw disk :(

20:45
 
 
 
 

Your suspicion is almost certainly correct. The ASR-5405Z is a hardware RAID controller — it fundamentally wants to manage arrays, not present raw disks. That PQual=1 "temporarily unavailable" is classic behaviour of it trying to initialise or build something in the background, and it messed with your SATA partition tables for exactly that reason — it was incorporating them into its own metadata.

The core problem

The 5405Z has no JBOD/HBA passthrough mode in its firmware. Unlike HBA-class cards, it always sits between the OS and the disks managing them itself. Without arcconf working you have no way to tell it to stop and just present the disk — you're essentially locked out of the controller's brain.

Honest assessment

Given that it:

  • Corrupted your SATA partition tables
  • Hangs any tool that tries to talk to it
  • Has no working management utility on Tumbleweed
  • Has no JBOD mode

Returning it is genuinely the right call. It's an enterprise RAID card that really wants to live in a Windows Server or RHEL environment with full Adaptec Storage Manager support.

What you actually want instead

For a Tumbleweed text server where you just want raw SAS disk access, look for an HBA (Host Bus Adapter) not a RAID controller:

  • LSI 9207-8i or 9211-8i — these are the classic choices, very well supported by the mpt3sas driver which is solid on Tumbleweed, true passthrough, disks appear immediately as /dev/sdX
  • LSI 9300-8i — slightly newer, same excellent Linux support
  • All of these can often be found secondhand for £20-40

That mpt3sas driver just works — no daemons, no management tools needed, disks appear instantly.

 
 
 
 

Information I had tried to sort with Llama 4, which failed miserably ... LSI 9207-8i annoyingly is on a long delivery from China ... LSI 9300-8i is £45 ... Ah, 9211-8i is only £35 ... Pity none of them do 12Gb/s, only 6; that would make it more palatable :) However, I'll have another dig tomorrow in Microchip as I see older builds of arcconf now!

20:59
 
 
 
 

Ha, yes the 6Gbps ceiling is the price you pay for going secondhand budget! In reality though, for a file server with spinning SAS disks you'll never saturate 6Gbps anyway — a single ST8000NM0075 maxes out around 2Gbps (roughly 250MB/s) sequential, so you have headroom to spare.

Older arcconf is worth trying

The key thing to look for is arcconf version 6.x or 7.x — these predate the daemon architecture and ran as a true standalone binary that talks directly to the hardware. If you can find one of those builds it should just work without any service dependency and might be enough to tell the controller to stop trying to build arrays and release that disk.

Look for filenames like:

 
<code>arcconf_6_xx_xxxxx_linux.tar.gz</code>

rather than the RPM packages which tend to be the newer daemon-based versions.

The 9211-8i at £35 is the sweet spot

Genuinely the community favourite for exactly your use case — mpt3sas driver, zero configuration, disks just appear. Many people also flash them to IT mode firmware which bypasses even the onboard RAID logic entirely, though it likely ships that way already for that price.

Good luck with the Microchip dig tomorrow — if you do find an older arcconf that actually talks to the controller it would be worth knowing what state it reports that ST8000NM0075 in, even if just out of curiosity before you box everything up!

 
 
 
 

Typical ... V3 ... Controllers found: 0 ... bedtime!

21:05
 
 
 
 

Ha! "Controllers found: 0" — arcconf has basically thrown its hands up and gone home too 😄

That's actually useful information in a backwards way — even the old version that should talk directly to hardware can't see it, which suggests the aacraid driver has the controller locked up in a way that blocks any out-of-band access. A classic catch-22.

Cut your losses, get some sleep, and order the 9211-8i tomorrow. For £35 and plug-and-play Linux support it's going to save you a lot of grief compared to wrestling with this Adaptec any further.

The ST8000NM0075 is a nice drive — 8TB Seagate Exos enterprise SAS, it'll be very happy behind a proper HBA. Worth keeping even if you return everything else.

Night! 🌙

 
 
 
 

<code>srv9:/usr/Arcconf # modprobe -r aacraid
modprobe: FATAL: Module aacraid is in use.</code>

... bed :)

21:07
 
 
 
 

Ha! Even the kernel won't let you rage-quit cleanly — the aacraid module is clinging on for dear life 😄

That's actually the controller's last laugh — it's got its hooks into something and won't let go without a full reboot, which you wisely don't want to risk at this hour on a text-only server.

Leave it all alone overnight, order the 9211-8i in the morning, and everything will look better after some sleep. The server isn't going anywhere!

Night! 🌙

 
 
 
 

yast

21:12
 
 
 
 

Ha! Autocorrect/fat fingers strikes at bedtime 😄

Night! 🌙

 
 
 
 

OK I have a working arcconf and I can see CREATE which I assume is what I need to do next even if only creating a raid on one disk?

07:56
 
 
 
 

Yes! A single disk RAID 0 (sometimes called a "simple volume") is exactly the trick to get the controller to present it as a block device. Try:

 
bash
<code>arcconf create <span style="color:#008080">1</span> logicaldrive max <span style="color:#008080">0</span> <span style="color:#008080">1</span> <span style="color:#008080">0</span> <span style="color:#008080">0</span></code>

Where the 1 0 0 at the end is channel/id/lun from your sg_map output. But first, paste the output of:

 
bash
<code>arcconf getconfig <span style="color:#008080">1</span> PD</code>

So we can confirm the exact device numbers for that ST8000NM0075 before we tell it to do anything — don't want it grabbing one of your SATA drives by mistake given its previous partition table adventures! 😄

 
 
 
 

<code>----------------------------------------------------------------------
Physical Device information
----------------------------------------------------------------------
   Device #0
      Device is a Hard drive
      State                         : Ready
      Block Size                    : 512 Bytes
      Supported                     : Yes
      Transfer Speed                : SAS 3.0 Gb/s
      Reported Channel,Device(T:L)  : 0,0(0:0)
      Reported Location             : Connector 0, Device 0
      Vendor                        : LENOVO-X
      Model                         : ST8000NM0075
      Firmware                      : LC77
      Serial number                 : ZA19KPP00000R817QGA2
      World-wide name               : 5000C50094E9920C
      Reserved Size                 : 2135384 KB
      Used Size                     : 0 MB
      Unused Size                   : 7628800 MB
      Total Size                    : 7630885 MB
      Write Cache                   : Enabled (write-back)
      FRU                           : None
      S.M.A.R.T.                    : No
      S.M.A.R.T. warnings           : 0
      Power State                   : Full rpm
      Supported Power States        : Full rpm,Powered off
      SSD                           : No
      maxCache Capable              : No
      maxCache Assigned             : No
      Temperature                   : Not Supported
      ----------------------------------------------------------------
      Device Phy Information
      ----------------------------------------------------------------
         Phy #0
            PHY Identifier          : 0
            SAS Address             : 5000C50094E9920D
            Attached PHY Identifier : 2
            Attached SAS Address    : 50000D11034E1700
         Phy #1
            PHY Identifier          : 1
            SAS Address             : 5000C50094E9920E</code>

 
07:57
 
 
 
 

Perfect — the disk is Ready and fully visible to the controller. Device is at Channel 0, Device 0. No ambiguity at all, and crucially Used Size: 0 MB so it hasn't been touched.

Now create a single-disk RAID 0 to get it presented as a block device:

 
bash
<code>arcconf create <span style="color:#008080">1</span> logicaldrive max r0 <span style="color:#008080">0</span> <span style="color:#008080">0</span></code>

Then watch:

 
bash
<code><span style="color:#0051c2">dmesg</span> <span style="color:#14181f">|</span> <span style="color:#0051c2">tail</span> -20
lsblk</code>

You should see a new /dev/sdX appear almost immediately. The controller is in a clean state with that disk so this should be straightforward — no array building or initialisation needed for RAID 0 on a fresh disk.
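If you'd rather script the wait than keep re-running lsblk, a small poll loop works (a sketch — wait_for_dev is a made-up helper, and /dev/sde is just the name we're expecting here):

```shell
# wait_for_dev: poll until a block device node appears, or give up.
#   $1 = device path, $2 = timeout in seconds (default 30)
wait_for_dev() {
  dev=$1
  tries=${2:-30}
  while [ "$tries" -gt 0 ]; do
    [ -b "$dev" ] && return 0   # -b: path exists and is a block device
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}

# e.g.: wait_for_dev /dev/sde 30 && lsblk /dev/sde
```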

Fingers crossed for a /dev/sde appearing! 🤞

 
 
 
 

The selected RAID level is not supported by this controller or this controller mode.

07:58
 
 
 
 

Ah, the 5405Z is being picky! Try these variants one at a time:

 
bash
<code>arcconf create <span style="color:#008080">1</span> logicaldrive max <span style="color:#008080">0</span> <span style="color:#008080">0</span> <span style="color:#008080">0</span></code>

(without the r0 — some versions use positional args)

If that fails:

 
bash
<code>arcconf create <span style="color:#008080">1</span> logicaldrive max simple <span style="color:#008080">0</span> <span style="color:#008080">0</span></code>

If that fails, check what RAID levels it actually supports:

 
bash
<code>arcconf getconfig <span style="color:#008080">1</span> AL</code>

That lists Available Logical device configurations. It'll tell us exactly what this controller is willing to do — some of the 5405Z firmwares are locked to RAID 1/5/6/10 only and won't do RAID 0 at all, which would be a very Adaptec enterprise thing to do! 😄

 
 
 
 

"Incorrect number of disks for the requested RAID type." to max 0 0 0

07:59
 
 
 
 

Getting closer! It wants more disks for whatever it's defaulting to. Try explicitly specifying RAID 0 with the correct syntax:

 
bash
<code>arcconf create <span style="color:#008080">1</span> logicaldrive max r0 <span style="color:#b34a00">drives</span><span style="color:#14181f">=</span><span style="color:#008080">0,0</span></code>

Or:

 
bash
<code>arcconf create <span style="color:#008080">1</span> logicaldrive max r0 <span style="color:#b34a00">drives</span><span style="color:#14181f">=</span><span style="color:#008080">0,0</span>,0,0</code>

But first, let's see what it actually supports:

 
bash
<code>arcconf getconfig <span style="color:#008080">1</span> AL</code>

That output is really what we need now — no point guessing syntax if it fundamentally won't do single disk RAID 0. It may only want to create volumes with minimum 2 or 4 disks, which would explain both errors you've hit. 😄

 
 
 
 

do you want all of that or just a particular block?

08:01
 
 
 
 

Just the summary of supported RAID levels and minimum disk counts — so the top section showing available types should be enough. Anything that lists RAID levels and their constraints is what we need! 😄

 
 
 
 

Not sure I am seeing that, the RAID block just has:

<code>RAID Properties
--------------------------------------------------------
Logical devices/Failed/Degraded     : 0/0/0
Copyback                            : Disabled
Automatic Failover                  : Enabled
Background consistency check        : Disabled
Background consistency check period : 30</code>

and the connected lane:

<code>Lane #0
   Channel ID                   : 0
   Device ID                    : 0
   SAS Address                  : 50000D11034E1700
   PHY Identifier               : 2
   -------------------------------------
   Lane SAS Phy Information
   -------------------------------------
      SAS Address               : 50000D11034E1700
      Attached PHY Identifier   : 0
      Attached SAS Address      : 5000C50094E9920D
      Negotiated Logical Link Rate : PHY enabled - 3 Gbps</code>

08:05
 
 
 
 

Hmm, no RAID level list there. Try:

 
bash
<code>arcconf create <span style="color:#008080">1</span> logicaldrive <span style="color:#b34a00">help</span></code>

Or just dump everything:

 
bash
<code>arcconf getconfig <span style="color:#008080">1</span> AD</code>

That's the Adapter config block and may show supported RAID levels.

Also worth trying — the syntax that older arcconf versions preferred:

 
bash
<code>arcconf create <span style="color:#008080">1</span> logicaldrive max <span style="color:#008080">0</span> noprompt</code>

And just out of curiosity:

 
bash
<code>arcconf getconfig <span style="color:#008080">1</span> LD</code>

To confirm there really are zero logical devices currently — because if the controller secretly already created something in the background during all that earlier hanging, that might explain the "incorrect number of disks" error too! 😄

 
 
 
 
 
 

AD gave the same initial stuff as AL. The create failed and gives the pasted help output.

08:09
 
 
 
 

Excellent — the help output is gold! Two important things:

1. JBOD is supported! That's better than RAID 0 for your use case:

 
bash
<code>arcconf create <span style="color:#008080">1</span> jbod <span style="color:#008080">0</span> <span style="color:#008080">0</span> noprompt</code>

That's channel 0, device 0 — exactly what your PD config showed.

2. If JBOD fails, the correct RAID 0 syntax from the help is:

 
bash
<code>arcconf create <span style="color:#008080">1</span> logicaldrive max <span style="color:#008080">0</span> <span style="color:#008080">0</span> <span style="color:#008080">0</span> noprompt</code>

That's max size, RAID level 0, channel 0, device 0.

Try the JBOD one first — it's the cleanest solution, presents the disk with zero controller interference, and is exactly what you want for direct single disk access. If it works you should see /dev/sde appear almost immediately! 🤞

 
 
 
 
 
 

Created JBOD: 0,0 Command completed successfully.

08:11
 
 
 
 

YES! Now check:

 
bash
<code>lsblk
<span style="color:#0051c2">dmesg</span> <span style="color:#14181f">|</span> <span style="color:#0051c2">tail</span> -20</code>

You should have a shiny new /dev/sde waiting for you! 🎉

 
 
 
 

<code>[38979.076011] [ T6954] sd 0:1:0:0: [sde] 15623761920 512-byte logical blocks: (8.00 TB/7.27 TiB)
[38979.076022] [ T6954] sd 0:1:0:0: [sde] 4096-byte physical blocks
[38979.076963] [ T6954] sd 0:1:0:0: [sde] Write Protect is off
[38979.076974] [ T6954] sd 0:1:0:0: [sde] Mode Sense: df 00 10 08
[38979.078613] [ T6954] sd 0:1:0:0: [sde] Write cache: enabled, read cache: enabled, supports DPO and FUA
[38979.399020] [ T6954] sd 0:1:0:0: [sde] Preferred minimum I/O size 4096 bytes
[38979.776170] [ T6954] sd 0:1:0:0: [sde] Attached SCSI removable disk</code>

and sg_map has sde against sg2 :) So do I just drop into the partitioner and set this up as /media3 now?

08:13
 
 
 
 

BRILLIANT! 8TB Exos is alive and kicking! 🎉

Yes, straight into partitioning now — it's just a normal disk from here:

 
bash
<code><span style="color:#0051c2">parted</span> /dev/sde mklabel gpt
<span style="color:#0051c2">parted</span> /dev/sde mkpart primary ext4 <span style="color:#008080">0</span>% <span style="color:#008080">100</span>%</code>

Then format:

 
bash
<code>mkfs.ext4 /dev/sde1</code>

Or if you prefer xfs (better for large files on a file server):

 
bash
<code>mkfs.xfs /dev/sde1</code>

Then mount:

 
bash
<code><span style="color:#0051c2">mkdir</span> -p /media3
<span style="color:#0051c2">mount</span> /dev/sde1 /media3</code>

And make it persistent — get the UUID:

 
bash
<code>blkid /dev/sde1</code>

Add to /etc/fstab:

 
<code>UUID=your-uuid-here  /media3  ext4  defaults  0  2</code>
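If you go with xfs instead, the fstab line changes slightly — note the final field drops to 0, since xfs doesn't use the boot-time fsck pass ordering:

```
UUID=your-uuid-here  /media3  xfs  defaults  0  0
```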

What filesystem are your other media volumes using? Worth staying consistent if you have a preference. 😄

 
 
 
 

I'd normally use ext4, but this will be the overflow for my recorded TV series so what difference would xfs make?

08:15
 
 
 
 

For recorded TV specifically, XFS has some real advantages:

XFS wins for your use case because

  • Large files — TV recordings are typically 2-8GB per episode, XFS was designed for exactly this, large sequential writes and reads
  • No slowdown over time — ext4 can get sluggish as a large volume fills up, XFS maintains performance consistently even at 90%+ full
  • Faster deletion — clearing out old series to make space is noticeably quicker on XFS
  • No reserved blocks — ext4 reserves 5% for root by default, on an 8TB disk that's 400GB wasted. You can fix that with tune2fs but XFS doesn't have the problem at all

ext4 wins on

  • Shrinking — you can't shrink an XFS volume, only grow it. Probably irrelevant for your use case
  • fsck recovery — marginally more mature recovery tools if something goes badly wrong

Verdict

For an 8TB overflow store for large TV recording files, XFS is the better choice. The 400GB saving alone is worth it, and the large sequential file performance is just a better fit.

 
bash
<code>mkfs.xfs /dev/sde1</code>

And in fstab use xfs instead of ext4 obviously! 😄
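The "400GB" saving mentioned above is a round number; working it from the actual Total Size that arcconf reported for this disk, the default 5% ext4 reservation comes out a little lower (simple shell arithmetic, just to show where the figure comes from):

```shell
# 5% reservation on this thread's disk (arcconf reported Total Size: 7630885 MB)
total_mb=7630885
reserved_mb=$((total_mb * 5 / 100))
echo "ext4 default reservation: ${reserved_mb} MB (~$((reserved_mb / 1024)) GiB)"
# If you do stick with ext4, reclaim it after mkfs with: tune2fs -m 0 /dev/sde1
```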

 
 
 
 

and on the parted step?

08:16
 
 
 
 

The mkpart line just needs the filesystem type changed — parted only uses it as a label hint anyway, the actual filesystem is created by mkfs:

 
bash
<code><span style="color:#0051c2">parted</span> /dev/sde mklabel gpt
<span style="color:#0051c2">parted</span> /dev/sde mkpart primary xfs <span style="color:#008080">0</span>% <span style="color:#008080">100</span>%</code>

Then:

 
bash
<code>mkfs.xfs /dev/sde1</code>

That's it! 😄

 
 
 
 

I dropped into the yast partitioner ;) "creating partition now ..."

08:19
 
 
 
 

Ha, fair enough — YaST is right there and does the same job with less typing! 😄

It'll handle the UUID and fstab entry automatically too which saves a step. Let us know when it's done and mounted! 🤞

 
 
 
 

I know it will take a while ;) Just to tidy up, I am wondering if it's worth dropping back to V4 of arcconf, since I can easily restore the working copy. I am working on the assumption that all the messing with disk drives on the controller was just a hindrance, and all I really needed to do was wait for the controller to respond as ready. The one thing I'd forgotten is that I've a 4 slot cage but only had three SATA ports, so the four extra SAS channels mean I can add another SAS drive and just wait for it to become ready ;)

08:34
 
 
 
 

Good thinking on the tidying up. On the arcconf version question — I'd actually keep the working version rather than drop back to V4. Your reasoning is sound that the controller just needed time to initialise, but having a working arcconf that can actually talk to the controller is genuinely valuable insurance for:

  • Adding that second SAS drive when it arrives — you now know the JBOD creation command that works
  • Any future controller weirdness
  • Checking drive health and status

V4 giving "Controllers found: 0" makes it essentially useless, so there's no real benefit in going back to it.

On the 4 slot cage point

That's a nice bonus you'd forgotten about! When the new SAS drive arrives the workflow is now sorted:

  1. Fit the drive
  2. rescan-scsi-bus.sh
  3. Wait for it to appear in arcconf getconfig 1 PD as Ready
  4. arcconf create 1 jbod 0 X noprompt (where X is its device ID)
  5. Partition and format

You've basically turned a frustrating evening into a documented procedure for the next one! 😄
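For the crib sheet, the one command in that workflow worth parameterising is step 4. A trivial helper avoids mistyping it (jbod_cmd is a made-up name; controller 1 and channel 0 are assumptions carried over from this thread's getconfig output):

```shell
# jbod_cmd: build the JBOD-creation command for a given device ID
# on controller 1, channel 0.
jbod_cmd() {
  echo "arcconf create 1 jbod 0 $1 noprompt"
}

# Next drive landing at Device #1:
jbod_cmd 1    # prints: arcconf create 1 jbod 0 1 noprompt
```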

 
 
 
 

That newer software versions seem to forget the basics is something of a fact today. I think the "Controllers found: 0" was actually after I had installed V3. The V4 version was simply waiting for the controller to become ready, while the old one responded but simply could not access it? Now that the controller is responding, V4 will probably work, but you are perhaps right that adding a second disk may screw things up again until the controller has processed it ... poor design on the part of the hardware-to-software interface?

08:40
 
 
 
 

That's a really astute observation and you're probably right on all counts. The timeline makes more sense that way:

What likely actually happened

  • V4 — daemon-based, was patiently waiting for the controller to become ready, hence the hanging
  • V3 — direct hardware access, could see the controller wasn't ready and just reported "Controllers found: 0" honestly
  • Working version — caught the controller at the right moment after it had finally finished whatever initialisation it was doing in the background

On the poor design point

Absolutely — a well designed controller/driver interface should:

  • Report why it's not ready rather than just blocking
  • Have a clear "initialising, please wait" status visible to the OS
  • Not silently grab disks and mess with partition tables during that phase
  • Not need a separate management utility just to present a single disk

The PQual=1 "temporarily unavailable" you saw at the very start was actually the controller trying to tell you this, but it's such an obscure SCSI status code that nothing surfaces it helpfully to the user.
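For the crib sheet, a small sketch that surfaces that qualifier in plain English rather than leaving it buried in the INQUIRY output (assumes sg_inq from sg3_utils; /dev/sg2 is only an example node, substitute your own):

```shell
# Translate the SCSI Peripheral Qualifier into something human-readable.
# PQual=0: logical unit connected and ready; PQual=1: connected but not
# yet accessible (the "temporarily unavailable" state seen in this thread).
DEV=/dev/sg2
if command -v sg_inq >/dev/null 2>&1 && [ -e "$DEV" ]; then
    case "$(sg_inq "$DEV" | grep -o 'PQual=[0-9]')" in
        PQual=0) echo "$DEV: ready" ;;
        PQual=1) echo "$DEV: connected but not yet accessible - wait and retry" ;;
        *)       echo "$DEV: unexpected qualifier" ;;
    esac
else
    echo "sg_inq or $DEV not available"
fi
```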

It's classic enterprise hardware designed for environments where an admin has a full Adaptec Storage Manager GUI running on Windows Server — bolted onto Linux as an afterthought. The LSI/HBA approach of just presenting disks directly to the OS is so much more elegant! 😄

 
 
 
 

NOW while I'm waiting on the format which is STILL running ;) I can go back through this thread and pull it all together as a crib sheet for others. The large second hand SAS disks are currently a lot cheaper than SATA ones, and so the £21 to get 4 SAS channels available is attractive.

08:44
 
 
 
 

That's a great idea and would genuinely help people — this thread covers a really complete real-world journey that documentation never does:

What the crib sheet should capture

  • The controller appearing in dmesg/sg_map but no block device being created is normal initial behaviour — just wait
  • sg_inq /dev/sgX to check PQual status
  • The arcconf version maze — V3 can't see it, V4 hangs waiting, find the middle ground
  • The critical JBOD creation command that actually works
  • The getconfig 1 PD check before creating anything to confirm device IDs
  • The XFS recommendation for large media files and the 400GB saved on 8TB

On the value point

You're absolutely right about the economics. Right now on eBay:

  • ASR-5405Z or similar 4-channel SAS card — £15-25
  • 8TB Seagate Exos SAS drives — considerably cheaper than equivalent SATA
  • SFF-8087 cable — a few pounds

The gotcha that nobody documents is that the card needs arcconf to create a JBOD before Linux sees anything — which is exactly what your crib sheet would solve for the next person who hits this wall! 😄

Is the format still churning away?

 
 
 
 

How long could it take to format an 8Tb disk? It is a little annoying that there is no actual indication it is still running :(

09:00
 
 
 
 

For XFS on 8TB it should actually be very fast — XFS format is nearly instantaneous as it doesn't pre-write anything, usually under 30 seconds even on a large disk.

The fact it's still running suggests YaST may be doing something more than just mkfs.xfs — it possibly selected:

  • ext4 with full initialisation
  • A slow/thorough format option
  • Or it's doing a partition check/wipe first

Check what's actually happening

In another terminal:

 
bash
<code><span style="color:#0051c2">ps</span> aux <span style="color:#14181f">|</span> <span style="color:#0051c2">grep</span> -E <span style="color:#008000">'mkfs|mke2fs|format|yast'</span></code>

And:

 
bash
<code><span style="color:#0051c2">dmesg</span> <span style="color:#14181f">|</span> <span style="color:#0051c2">tail</span> -10</code>

That'll tell you what process is actually running and whether it's still active or has silently finished. YaST sometimes completes the underlying operation and just sits on the progress screen without being obvious about it! 😄

How long has it been running now?

 
 
 
 

root 7309 0.0 0.0 7208 5372 pts/0 S+ 08:17 0:00 sh -c -- /sbin/yast partitioner >&2
root 7310 0.0 0.0 7348 5628 pts/0 S+ 08:17 0:00 /bin/bash /sbin/yast partitioner
root 7337 0.1 0.2 718796 93080 pts/0 Sl+ 08:17 0:03 /usr/bin/ruby.ruby4.0 --encoding=utf-8 /usr/lib/YaST2/bin/y2start partitioner ncurses -name YaST2 -icon yast

and it's now 9:04. Sod it ... it needed another Next to actually start ;) as you say ... it's fast

09:06
 
 
 
 

Ha! 47 minutes staring at a screen waiting for something that hadn't actually started yet! 😄

Classic YaST — it was just sitting there politely waiting for you to tell it to actually go. At least the XFS format itself will be done in seconds and you'll have your 8TB mounted before the kettle boils!

 
 
 
 

df -> /dev/sde1 /media3, but 2% used?

09:07
 
 
 
 

That's XFS metadata and the journal, completely normal. On 8TB that 2% works out at roughly 160GB, which sounds alarming, but df rounds to whole percentages and it's just how XFS structures itself for a volume that size. You're not losing anything usable in practice.

Run:

 
bash
<code><span style="color:#0051c2">df</span> -h /media3</code>

That'll show you the actual available space in human readable form — you should see around 7.2-7.3TB free. 😄

 
 
 
 

Now that is tidier ;) and 143G used which is nothing when an '8Tb disk' only formats up as 7.3Tb ...

09:10
 
 
 
 

Ha yes, the eternal "8TB" deception! 😄

Where the space goes

  • Manufacturers count in decimal — 8TB = 8,000,000,000,000 bytes
  • Linux counts in binary — so it shows 7.3TiB (tebibytes)
  • That gap is ~700GB just from the decimal/binary difference, completely normal
  • Then XFS metadata/journal takes the 143GB on top

So you're not actually losing anything; it's just marketing maths vs binary maths. Every "8TB" disk shows up as roughly 7.3TiB on Linux.
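The marketing-vs-binary maths above can be checked numerically with plain awk:

```shell
# 8 "marketing" terabytes expressed in binary tebibytes, plus the gap
awk 'BEGIN {
    bytes = 8 * 1000^4                 # 8TB as the manufacturer counts it
    tib   = bytes / 1024^4             # the same bytes in TiB
    printf "8TB = %.2f TiB, a gap of about %.0f GB\n", tib, (8 - tib) * 1000
}'
```

which prints `8TB = 7.28 TiB, a gap of about 724 GB`, matching the ~700GB figure above.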

Net result

You have roughly 7.1TB of usable space for your TV recordings which at typical broadcast quality is probably:

  • 2,000+ hours of HD recordings
  • Or several complete series of everything you'd ever want to keep 😄

Ready to write that crib sheet?

 
 
 
 

Morning physio and then get stuck in. In the meantime I can leave srv9 copying 2Tb from /media2 to /media3 and leave that space to allow TV Shows to expand on media2 ;)

09:13
 
 
 
 

Perfect plan! 2TB copy will take a while but it'll chug away happily in the background. Kick it off with:

 
bash
<code><span style="color:#0051c2">cp</span> -av /media2/TVShows /media3/ <span style="color:#14181f">&</span></code>

The -v gives you a running log of what's being copied, and & puts it in the background. Note that a plain & won't survive logging out though; for that you need nohup or disown. Or if you want to be able to check progress later:

 
bash
<code><span style="color:#0051c2">nohup</span> <span style="color:#0051c2">rsync</span> -av --progress /media2/TVShows /media3/ <span style="color:#14181f">&</span></code>

rsync is better for large copies: if anything interrupts it, a re-run skips the files already copied rather than starting the whole lot over.
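For a copy that might outlive the terminal session, here's a slightly more defensive sketch (same example path as above; --partial is a standard rsync flag that keeps part-transferred files, so a re-run resumes mid-file instead of redoing the interrupted one):

```shell
# Restart-safe background copy; only runs if the tools and path exist.
SRC=/media2/TVShows                    # example path from this thread
if command -v rsync >/dev/null 2>&1 && [ -d "$SRC" ]; then
    nohup rsync -av --partial --progress "$SRC" /media3/ > "$HOME/copy.log" 2>&1 &
    echo "copy started, log in $HOME/copy.log"
else
    echo "rsync or $SRC not available on this machine"
fi
```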

Check progress anytime with:

 
bash
<code><span style="color:#0051c2">watch</span> -n <span style="color:#008080">60</span> <span style="color:#0051c2">df</span> -h /media3</code>

Enjoy physio, good luck with the crib sheet, and nice work getting there in the end — Llama 4 would still be scratching its head! 😄

 
 
 
 

/Archive but yep and I just leave it running on ssh in a terminal window ;)

09:15
 
 
 
 

Ha of course, /Archive! And yes just leave the ssh terminal open and let it run — srv9 will quietly get on with it while you're at physio. 😄

Enjoy the session!