Rescuezilla down to 340kb per second....
Grogster - Admin Group - Joined: 31/12/2012 - Location: New Zealand - Posts: 9851

I don't get this.

Trying to back up my Linux Mint system, and the process went fine, but the copy-rate drops down to 340 KILOBYTES per second after about 10 mins. I can PISS faster than that.

Source is an ADATA 1TB NVMe M.2, destination is a Kingston USB3 SSD with a rated write speed of 900MB/s. It STARTS off fab, but gets slower and slower, till it settles at 340K per second, with a predicted finish time of about 78 hours from now........for a 60GB setup. WTF ?!?!!!

USB SSDs are SUPPOSED to be WAY faster than USB flash drives, so WHY the slow-down? USB3 should be able to do WAY more than 340K per second. Hell, that is less than USB1 speed.....

I plan to try an NVMe M.2 drive in a USB3 external enclosure tomorrow, but to say I am disappointed with the copy speed of a so-called USB3 SSD (the Transcend thing) would be the understatement of the year at this point.

Does anyone have any insight? USB3 SSDs SHOULD be able to AT LEAST sustain USB3 write speeds.

Smoke makes things work. When the smoke gets out, it stops!
phil99 - Guru - Joined: 11/02/2018 - Location: Australia - Posts: 2951

I get a similar thing with high-capacity flash drives. For small data writes they are very fast, but when writing a big block of data they soon slow to a crawl.

My guess is they have some fast cache memory to buffer the data, but when that is full you get to see the real speed of the bulk storage. Perhaps some SSDs use the same trick.

For a given capacity there is a big difference in prices. Maybe the most expensive ones have a higher sustained write speed.
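A quick way to test the cache theory directly, as a rough sketch - it assumes the USB SSD is mounted at /mnt/usbssd, so substitute your own mount point:

# write 8GB straight to the drive, bypassing the Linux page cache,
# reporting the transfer rate as it goes
dd if=/dev/zero of=/mnt/usbssd/speedtest.bin bs=1M count=8192 oflag=direct status=progress

# tidy up afterwards
rm /mnt/usbssd/speedtest.bin

If the rate starts high and then collapses after a few gigabytes, the drive's fast cache is filling and the figure it settles at is its true sustained write speed.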
Grogster - Admin Group - Joined: 31/12/2012 - Location: New Zealand - Posts: 9851

Yeah, I'm beginning to wonder myself. An M.2 SSD in a USB3 external case might prove a point, I think.

But you pay a premium for "USB3 SSDs" over "USB3 flash drives"..........are we all being ripped off, I wonder.....

I'll do my M.2 SSD test tomorrow.

Smoke makes things work. When the smoke gets out, it stops!
Grogster - Admin Group - Joined: 31/12/2012 - Location: New Zealand - Posts: 9851

I did another test - a 32GB OS image file copied to the USB3 Transcend SSD. I copied it to the M.2 NVMe drive first, for maximum transfer performance. THIS DID COPY fine, at about 500MB/s, so the issue would seem to be with the Rescuezilla software, perhaps.

Smoke makes things work. When the smoke gets out, it stops!
ville56 - Guru - Joined: 08/06/2022 - Location: Austria - Posts: 378

I also use an NVMe M.2 in an external USB3 case as a backup destination. I use rather expensive SSD modules with high specified data rates, and I also get a drop in transfer rate after some time when the cache is full, but the data rate is still way over 150 MB/sec. I also have a cheapo module that delivered very slow rates - only usable as a faster USB stick.

I can recommend Samsung, the models nearer the top of the line with the Pro suffix.

73 de OE1HGA, Gerald
Grogster - Admin Group - Joined: 31/12/2012 - Location: New Zealand - Posts: 9851

Thank you. I will do another test tomorrow, to a few different media.

The thought occurs: a Linux system has HUNDREDS OF THOUSANDS of small config files. Even MORE than Windoze does. The files are VERY SMALL, but there are HUNDREDS OF THOUSANDS of them. Perhaps that is slowing things down? Just the process of copying all of those?

Smoke makes things work. When the smoke gets out, it stops!
phil99 - Guru - Joined: 11/02/2018 - Location: Australia - Posts: 2951

Tomorrow you can test that too. Compare a 2GB .iso image with 2GB of little files.
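A rough way to run that comparison from a terminal, assuming the USB SSD is mounted at /mnt/usbssd (all paths here are examples only):

# make one 2GB file, then split a copy of it into roughly 32,000 small files
dd if=/dev/urandom of=/tmp/big.bin bs=1M count=2048
mkdir /tmp/small
split -a 4 -b 64K /tmp/big.bin /tmp/small/part_

# time the single big file (sync so the write really finishes)
time sh -c 'cp /tmp/big.bin /mnt/usbssd/ && sync'

# time the same 2GB as thousands of small files
time sh -c 'cp -r /tmp/small /mnt/usbssd/ && sync'

The gap between the two times is the per-file overhead (directory and metadata updates) rather than raw drive speed.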
ville56 - Guru - Joined: 08/06/2022 - Location: Austria - Posts: 378

Small files always slow down transfer rates, as not only the data has to be transferred but also the metadata that keeps directories and block allocation up to date. This is why I use backup programs that generate a single file as output wherever possible.

73 de OE1HGA, Gerald
JohnS - Guru - Joined: 18/11/2011 - Location: United Kingdom - Posts: 4215

When I want to back up a large number of smallish files I use a pipeline which gathers them into a container, usually compresses them, and often transfers them across my LAN to a destination. E.g. gather using tar or cpio, compress (a tar flag or a tool such as xz), and pipe to ssh to the destination. That way any file metadata is handled by tar (or cpio). You can later restore everything or extract individual files.

The compression slows things a bit, but the size written is reduced (a lot, with small or text-type files) so you win overall. If your destination happens to be fast (such as a hard drive) and big enough, you can skip the compression if you like.

John
Edited 2026-01-22 18:12 by JohnS
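A minimal sketch of that kind of pipeline - the host backuphost and the path /srv/backups are placeholders, not anything from John's setup:

# gather /home into one tar stream, compress it, and ship it over ssh in one go
tar -C /home -cf - . | xz -T0 | ssh user@backuphost 'cat > /srv/backups/home.tar.xz'

# later: pull the archive back and list (or extract) its contents
ssh user@backuphost 'cat /srv/backups/home.tar.xz' | xz -dc | tar -tf -

Because all the little files travel inside a single stream, the destination only ever sees one big sequential write, which is exactly the case these drives handle well.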
KeepIS - Guru - Joined: 13/10/2014 - Location: Australia - Posts: 2005

FYI: a compressed Linux system snapshot of 62GB takes 8 minutes, including compression and CRC checks. The SSD is rated at 500MB/s read and 390MB/s write; it slows to 290MB/s writing at the end.

A LuckyBackup of the Home directory, writing 34.8GB to the same SSD, also runs at 290MB/s and takes 1.9 minutes, which includes the processing for larger block writing. In both cases the SSD is maxed at 100% when writing.

An external M.2 drive in a USB-C housing runs at over 900MB/s writing when backing up from the main M.2 drive; neither drive is anywhere near maxed out, so the internal bus, drivers and hardware are likely what limits the speed.

Edited 2026-01-22 19:03 by KeepIS

NANO Inverter: Full download - Only Hex Ver 8.2Ks
Andy-g0poy - Regular Member - Joined: 07/03/2023 - Location: United Kingdom - Posts: 83

>I don't get this.
>
>Trying to backup my Linux Mint system, and the process went fine, but the copy-rate
>drops down to 340 KILOBYTES per second after about 10 mins.

340Kbytes is about 2.7 Mbits p/sec. Be careful with the specs of the devices - they usually quote speeds in bits, as that appears faster :-) Also, read speeds are much faster than write speeds, so they may be quoting that too. As others have said, it's the cache filling up.

Another thing to watch out for: one slow USB device will cut the speed down to that level for all devices on that controller.

60GB is a huge backup, and you are probably not backing things up in the most optimal manner. There is no real need to always back up the system files, as they will be restored from the source media if necessary. It is worth keeping a backup of the config directories. The rest of the system will be your own files - how often do you change them, and which sections?

I do:
a full backup of the system once a month, excluding /home
a full backup of /home weekly
a backup of other big files such as videos (which are in a separate directory) monthly, or manually if I make a lot of changes for some reason
a backup of /docs/mydocuments every couple of hours

It will always take a long time for the first backup as it's shoving a lot of data about, but after that only changes are sent. After the first backup I don't really notice the backups taking place; it all happens in the background.

I use backintime as my backup system.

Andy
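Andy uses backintime for this; purely as a sketch of a comparable schedule done with plain rsync and root's crontab (the destination /mnt/backup and the /home/andy/docs path are made-up examples):

# m h dom mon dow  command
# monthly full system backup, excluding /home and virtual filesystems
0 2 1 * *  rsync -aAX --delete --exclude=/home --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run --exclude=/tmp --exclude=/mnt --exclude=/media / /mnt/backup/system/
# weekly backup of /home
0 3 * * 0  rsync -aAX --delete /home/ /mnt/backup/home/
# documents every couple of hours
0 */2 * * *  rsync -a /home/andy/docs/ /mnt/backup/docs/

rsync only sends what has changed, so after the first run these jobs are quick, much like backintime's incremental snapshots.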
robert.rozee - Guru - Joined: 31/12/2012 - Location: New Zealand - Posts: 2488

hi Grogster,
try the following: edit /etc/sysctl.conf using

xed admin:///etc/sysctl.conf

and add the following lines to the bottom of the file:

# Improve writing to external media like USB memory sticks
# 64Mb / prime close to 48Mb
vm.dirty_bytes=67108864
vm.dirty_background_bytes=49999991

Reboot, and see if the write speed improves. If it doesn't, just remove the added lines.

What the change does is force the operating system to limit the length of continuous writes. For me, these long writes were overwhelming the flash drive being written to, with the drive responding by slowing down significantly, much as you described. The problem arose when I went from Mint 21.x to 22.x, presumably with a change in the kernel. By default, I believe the newer kernels allow ALL of free RAM to be used as a buffer, so potentially gigabytes of data get thrown at the drive being written. Limiting the available buffer (to around 56Mb) restored performance (for me, at least).

cheers,
rob :-)
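To see what the machine is currently using, and to apply rob's change without rebooting, a small sketch (nothing Mint-specific assumed):

# show the current writeback limits (the *_ratio values are the percentage-of-RAM
# defaults that apply while the *_bytes values are 0)
sysctl vm.dirty_bytes vm.dirty_background_bytes vm.dirty_ratio vm.dirty_background_ratio

# re-read /etc/sysctl.conf so the new values take effect immediately
sudo sysctl -p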
atmega8 - Guru - Joined: 19/11/2013 - Location: Germany - Posts: 730

Use partclone in expert mode, with lz4 compression. You don't have a software problem.

HDD → USB-HDD: 80–150 MB/s
SSD → SSD: 300–500 MB/s

Try an empty target and better hardware; full disks slow down. Check/repair the filesystems. Don't use cheap USB cables.

Edited 2026-01-23 00:42 by atmega8
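For anyone wanting to try that by hand rather than through Rescuezilla, a minimal sketch - it assumes the Mint root filesystem is ext4 on /dev/nvme0n1p2 and the USB SSD is mounted at /mnt/usbssd, so adjust both to your own layout (the source partition must not be mounted read-write while it is imaged):

# image only the used blocks of the partition
sudo partclone.ext4 -c -s /dev/nvme0n1p2 -o /mnt/usbssd/mint-root.img

# compress the image with lz4 (fast and light on the CPU), removing the uncompressed copy
lz4 --rm /mnt/usbssd/mint-root.img /mnt/usbssd/mint-root.img.lz4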
Grogster - Admin Group - Joined: 31/12/2012 - Location: New Zealand - Posts: 9851

OK, an update for all of you.

I used an external USB3.2 M.2 NVMe enclosure with a 1TB ADATA 700-series SSD, and running the backup took 10 minutes from start to finish. Average write rate of 175-200MB per second, constant. It did not slow down.

I know that USB3 FLASH DRIVES can't handle a very fast sustained write speed, but as far as I knew, USB3 SSDs could. They ADVERTISE as much, with the Transcend one I bought for the purpose claiming it can do 900MB per second writing and 1.25GB per second reading. You pay MUCH more for a USB3 SSD than you do for a flash drive, so you would EXPECT the writing performance to be that much better, and you would NOT expect it to have a 'slow-when-cache-full' kind of operation, like most flash drives.

In any event, I now have my OS image backup completed, and I will now simply copy it to the USB3 SSD I had intended for the task.

EDIT: Copying the OS image from the M.2 SSD to the Transcend USB3 SSD drive took about 3 minutes for a 70GB image, with an average sustained write rate of around 286MB per second. It did not slow down at all. I feel I have misjudged the Transcend USB3 SSD. It must have been something in the way that Rescuezilla was reading the source and writing to the destination.

Edited 2026-01-23 15:15 by Grogster

Smoke makes things work. When the smoke gets out, it stops!
Mixtel90 - Guru - Joined: 05/10/2019 - Location: United Kingdom - Posts: 8461

There's no way round it. Backing up a lot of small files will get slow because of all the additional data transferred (file headers etc.) and having to update the directory tree.

The very fast write times quoted will be for a single small file that's smaller than the cache. Everything slows down when the cache is full, no matter what size of files you are saving. What this means is that for normal use the cache will fill but it will empty relatively quickly, so day-to-day performance is good. Backups are not.

Mick

Zilog Inside! nascom.info for Nascom & Gemini Preliminary MMBasic docs & my PCB designs
lew247 - Guru - Joined: 23/12/2015 - Location: United Kingdom - Posts: 1704

You "probably" already know this, but all SSDs lose data over a period of time if they are not powered up now and again.
Mixtel90 - Guru - Joined: 05/10/2019 - Location: United Kingdom - Posts: 8461

For some time I haven't bothered backing up the OS itself, only /home. It's easy enough to do a new install of Linux and just point it to the drive or partition where /home is. It can be almost anywhere.

Mick

Zilog Inside! nascom.info for Nascom & Gemini Preliminary MMBasic docs & my PCB designs