Category Archives: Hardware

Replacing my DIY NAS with an Ugreen NASync DXP8800 Plus

Yesterday my NAS replacement hardware arrived.

Previously I was using an Intel Avoton C2750 board, which failed after a few years with a written-to-death EEPROM.
Then I replaced it with an Intel Atom C3558 board, where it was promised that this should not happen… I won't wait…

There was a good offer, and I wanted to migrate to SSDs anyway.
The Ugreen NASync DXP8800 Plus has 8 SATA bays for 3.5″/2.5″ disks,
as well as 3 onboard M.2 NVMe slots and a PCIe expansion slot where a dual-NVMe expander card can be installed. The CPU is an i5-1235U with 10 cores.

As soon as the memory upgrade and the expander card arrive, I'll post some more pictures.

I replaced the fans with some silent Noctua NF-A12x25 units, and replaced the RAM with 2x 32GB Crucial DDR5-4800 SODIMMs (1.1V).

Now I’m still waiting for my SSD drives to be delivered.
8x Western Digital Red SA500 NAS SSDs (4TB) and 4x WD SN850X 8TB NVMe drives.

For now I'll keep the 128GB boot NVMe, but it could be replaced too and hold another pool; we'll see.

2x 8TB RAID1 on macOS via Thunderbolt 4

Since I'm putting huge loads on my Mac Studios, I wanted to move the read/write load away from the internal SSDs and gain a bit more redundancy in case an SSD fails.

All my containers running in Podman are now placed on an SSD RAID1 built from two Western Digital SN850X 8TB NVMe M.2 SSDs, which I put into an Anyoyo fan-cooled dual-NVMe Thunderbolt 4 enclosure.
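Recreating such a mirror is straightforward with macOS's built-in AppleRAID (Disk Utility can do the same thing graphically). Here is a minimal sketch, assuming the two NVMe drives show up as disk4 and disk5 and that APFS is the desired filesystem; the disk identifiers and set name are placeholders, and creating the set wipes both disks.

```python
#!/usr/bin/env python3
"""Minimal sketch: mirror two external NVMe disks with AppleRAID on macOS.
The disk identifiers, set name and filesystem are assumptions -- check
`diskutil list` first; creating the set destroys all data on both members."""
import subprocess

def run(cmd: list[str]) -> None:
    # Print and execute a diskutil command, failing loudly on errors.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# The two SSDs as they appear in the Thunderbolt enclosure (hypothetical IDs).
MEMBERS = ["disk4", "disk5"]

# Create a RAID1 ("mirror") set named PodmanRAID and format it as APFS.
run(["diskutil", "appleRAID", "create", "mirror", "PodmanRAID", "APFS", *MEMBERS])

# Show the set status; both members should be listed as online and in sync.
run(["diskutil", "appleRAID", "list"])
```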

I'm still not sure whether I should put all the LLM model cache directories onto this RAID1 and export them from there, or put them on the main NAS and mount them from there over 10Gbit/s Ethernet. A Mac Studio-to-Mac Studio link over 80Gbit/s Thunderbolt 5 would also be an option, but for now I don't want IP forwarding there…

Yet another Mac Studio M3 Ultra this time with 512GB

Did I say 256GB was enough…?
Meh…

Okay, lesson learned: it's never enough.
My "1 month old" M3 Ultra 256GB ran out of memory running all those models and Podman containers in parallel.



My current setup is:
OpenWebUI:
-> LM Studio with Aya-Vision-32b (connected as an OpenAI-compatible backend, see the sketch below)
-> ComfyUI workflow with t5xxl, FLUX.1-dev and llama-3.1
-> MCP-proxied tools: SearXNG, Wikipedia, Docling, Context7, time, memory, weather, sequential-thinking
Podman: 24 containers, including Supabase, Wiki.js, Watchtower…
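For reference, the OpenWebUI to LM Studio link is just LM Studio's local server speaking the OpenAI-compatible API, by default on http://localhost:1234/v1. Below is a minimal sketch of the same kind of call from Python; the port is the LM Studio default, and the model identifier and prompt are placeholders, so adjust them to whatever LM Studio shows for the loaded model.

```python
"""Minimal sketch of talking to LM Studio the same way OpenWebUI does:
through its OpenAI-compatible local server. Port, model name and prompt
are assumptions, not taken from the actual setup described above."""
from openai import OpenAI

# LM Studio's local server; the API key is not checked for local use.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="aya-vision-32b",  # placeholder: use the identifier LM Studio lists
    messages=[{"role": "user", "content": "Summarize my homelab setup in one sentence."}],
    max_tokens=200,
)
print(resp.choices[0].message.content)
```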

I also discovered that I can use OpenWebUI, SwarmUI, exo and even MLX
to distribute workloads across both Mac Studios via 80Gbit/s Thunderbolt 5 bridging.

And with the orange clown, you never know if there will be a new M4 Ultra next year at all.

Upgraded my old M1 mini to Mac Studio M3 Ultra 256GB

Finally the latest Mac Studio was released, unfortunately only with an M3 chip instead of an M4, but I had simply waited too long already.
Let's see how this one compares to the M4 Pro I recently bought for my son when it comes to LLMs.

This one will be used with Podman Desktop and LM Studio, and will hopefully be fast enough to also handle voice recognition and rendering in real time.

256GB of unified memory will be more than enough for my use cases; 96GB would simply not have been enough, as I already saw on the 64GB M4.

M4 Pro mini for my son

Time to pass along one of my old M1 minis to my parents and get a newer M4 Pro for my son, and of course do some further LLM testing.

So I decided to get the maximum configuration with 64GB of RAM and the Pro chip with the highest clock speed and core count.

Not sure if it will be sufficient for video rendering in ComfyUI as well, let's see.
Overall I expect roughly four times the speed across all scenarios.

AMD Ryzen 9 5900XT

The final upgrade for my old X570 AMD platform.
Since I already upgraded to an Nvidia GPU and the LLM performance is still not enough, I'm now trying a 32-thread CPU (the low-power variant, of course).

I also invested for the first time in a closed-loop water cooler, an Arctic Liquid Freezer III 360.

And I also upgraded the RAM to the maximum of 128GB.

This is the end of the road for that Taichi X570 motherboard, and probably a final goodbye to the PC platform, as Apple can now even cleanly emulate DirectX on Metal.


And already the LLM knows what’s best for me 😀

No need to write 5000 words, I can do it in fewer 😀 damn poser LLM

Palit GeForce RTX 4080 GamingPro 16GB

Just bought an Nvidia RTX 4080, as the ROCm bullshit on AMD is really driving me crazy. Every piece of machine-learning software I try has issues when using ONNX or other OpenCL capabilities, and VRAM is always a problem, even with my old but fast Sapphire Radeon RX 5700 XT 8GB.

Sorry AMD, I did like your power efficiency, but your software skills are just horrible.