I definitely need to find a better file manager than Thunar. I am trying to sort through some files so I’ve got two windows side by side. Every time I go into a different folder, the columns change width, and that’s even after I turned off the auto-adjust option. I’ve got the date modified as the last column on the right, and every time this happens, that column moves sideways and obscures part of the timestamp.
It may be due to the dimensions of the two windows not being exactly the same down to the last pixel horizontally. I mostly got around it by putting a throwaway column on the right side and then using the “open in new window” command to have the OS create an identically sized window. Now when it decides to resize the columns, at least one of them is a column I don’t care about, though it still affects my ability to see the full file name as that column changes size.
Fighting the program to get it to stop deciding “You made a change to the visual appearance of this window, so that must mean you want the change made to all instances of this application and I’ll just do it for you” and “I’m going to resize the columns no matter what you say” is an unnecessary frustration.
And the pixel hunt to find the edge of a window is still annoying. You literally have to find the one pixel that is on the edge of the window to grab ahold of it.
I don’t know, since this is Linux instead of Windows. I also forgot to add that Thunar doesn’t understand the concept of visually separating each column. A right-aligned column for file size followed by a left-aligned column for modification date looks like one big column.
It’s Ubuntu Studio. Since it wants to do some updates, this is a good time to power down the computer and get the drive plugged into a SATA port to see how it does.
I’ll take “Things that have a simpler answer” for $200.
The answer is: This will sometimes cause a computer to not be able to boot.
“What is a failing hard drive”?
That is correct.
I’m not quite convinced this is it, though. The drive’s only a year old. I’m going to put it back in the external drive enclosure and do some more testing over on the other computer to see what it does.
Yup, and no effect. I had given it a stress test of about 370GB of misc files last night. Quite a few smaller files but some that are a few GB in size. It said it would take about 3 hours to finish. This morning, it had reached 54% and still had three hours to go. Right now, it’s at 80% and still says there’s three hours to go.
The same drive didn’t have any problems on the Windows 10 computer. It’s gotta be something with this motherboard or in Linux itself. It seems like the larger the number of files it has to copy, the slower it goes as it progresses. I wish the “copying files” display would show me the speed it’s copying at like Windows does.
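For what it’s worth, rsync run from a terminal does show a live speed readout, which the file manager copy dialogs don’t. Rough sketch with made-up paths (--info=progress2 needs rsync 3.1 or newer):

# shows overall percentage, bytes copied, and current transfer rate
rsync -a --info=progress2 /path/to/source/ /media/user/destination/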
The problem is, now that files are being copied, I don’t think I can stop it. Last time I tried, it almost wouldn’t even respond to stop and I had to format the hard drive to get it to be usable again.
Rough guess is that maybe by tomorrow night or on Sunday, it will finish up and I can try smaller batches to see how they do. If I can get the two computers to talk to each other across a network share, that might even be faster.
Both Thunar (the default file manager) and PCManFM slow to a crawl when they have a lot of files to copy. I can get some of the speed back by copying one folder at a time so there’s less to copy at once.
Copying to a shared folder across the network started out faster, which it shouldn’t be, because that other computer is temporarily on a 100 Mbit/s switch and USB 3.0 should be faster. It’s doing the same thing as the Thunar copy did: started at 40 MB/s, slowed down to about 6, and occasionally goes back up to 12.
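Something I could try to rule the network in or out: iperf3 measures raw throughput between two machines with no disks involved. Sketch only; both ends need iperf3 installed, and the address below is made up.

# on the receiving machine
iperf3 -s
# on this machine
iperf3 -c 192.168.1.50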
What in the world is going on with this computer? I have a lot of data I need to move to the Windows computer. At this rate, it may be Christmas before this gets done.
This is not helpful. PCManFM is only counting the current folder level when I select properties. It’s not going into the sub-folders. It thinks I only have 3GB of data on the drive when it’s actually over 1TB.
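du from a terminal does recurse into sub-folders, so that at least gives a cross-check on the real total. Quick sketch with a made-up mount point:

# grand total for everything under the mount point
du -sh /media/user/bigdrive
# or broken out by top-level folder, sorted by size
du -h --max-depth=1 /media/user/bigdrive | sort -h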
I’m glad I never made this computer into a file server. Getting files into it was easy. Saying that getting files out of it is problematic doesn’t even come close.
cd [source directory]
tar cpf - . | (cd [destination directory] && tar xpvf -)
In a nutshell - recursively tar (TApe aRchive) the current directory but stream the output to standard-out, directed into a pipe (|) where you cd to the destination directory and then verbosely extract the data from standard-in to the current (destination) directory.
That will run as fast as it possibly can, and give you a “status” update as each file is written.
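If an overall rate is more useful than the per-file listing, pv can sit in the middle of that pipe and print bytes copied plus current throughput. Same idea, just a sketch, and it assumes pv is installed (it usually isn’t by default):

cd [source directory]
tar cpf - . | pv | (cd [destination directory] && tar xpf -)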
My old place had Linux boxes and some lingering tape drives as of 2015. Sure, it was part of the Data Center of Forgotten Technology, but we still had it.
My wife’s work does backups to tape. They have managers take them off-site (which is good!) but until I explained it to her, they used the same set of tapes for years. I don’t know if it’s as common with modern tech, but I’ve heard that tapes degrade after years. They did get some new tapes, and I think their vendor is pushing them to do Cloud backups anyway.
My wife and her bosses are all old enough to be of the cassette tape & VHS era… They should really know how magnetic tape degrades over time!
One more bonus tape story, not that anyone was asking:
Oldjob, when I started back in 1998, also had a minicomputer from a company that went out of business in the early 90s. Big dishwasher-sized cabinets, with one for the CPU and a couple for large reel-to-reel tape drives! The tape drives were garbage, but also beasts: they used big-diameter reels with a vacuum mechanism to pull the leader (the first part of the tape) down and load it. So to add to all the other failure states of decade-old tech from a defunct company, add ‘vacuum pump failure.’ Cool to watch: very 70s sci-fi. And apparently surprisingly fast for data transfer, actually, which is why tape drives still have some value.
To my understanding, in most cases the records on these were stored in some incredibly simple format, so even if you lost a foot or two of tape, you’d just lose a few records at the front of the tape.
Someone had also connected a more standard-ish tape drive to the thing. Imagine the ‘squashed shoebox’ style enclosure common in the 90s/00s. It probably held as much capacity as the floor-standing tape drives, if not more… and it was easier to deal with.
I figured out what’s happening. Now I need to understand why it’s happening.
The problem is the partition size when it’s formatted with NTFS. I had a 2TB drive, created a 1.6TB partition and the other partition was the remaining 224GB. Big partition causes the slowdown. Small partition does not.
Replicated it on the other computer. Took the 4TB drive which had the network share I was testing and shrank the partition down to 100GB. Speed picked up simply by changing the partition size.
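A crude way to double-check that it’s really the partition size and not the copy tool: write a test file to each partition with dd and compare the rates. Sketch only, mount points are made up; conv=fdatasync forces a flush at the end so the number isn’t just the RAM cache.

dd if=/dev/zero of=/media/user/bigpartition/testfile bs=1M count=2048 conv=fdatasync status=progress
dd if=/dev/zero of=/media/user/smallpartition/testfile bs=1M count=2048 conv=fdatasync status=progress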
Both the drive connected via USB cable and the one shared across the network are now receiving files pretty consistently at 40-60 MB/s. There are a few slowdowns, but it doesn’t take long to come back up. That’s doable.
What is it about Linux writing files to an NTFS partition that makes the partition size such a critical factor?
NTFS is a Microsoft file system. Reading is supported, but writing is a totally different situation. Linux in general is not real crazy about NTFS. Here’s a link, different distribution but you get the idea…
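One thing that sometimes gets suggested for slow NTFS writes on Linux is remounting with ntfs-3g’s big_writes option, which lets it write in larger chunks (it only matters on older ntfs-3g releases; newer ones behave that way by default). Hedged sketch, device name and mount point made up:

sudo umount /media/user/bigdrive
sudo mount -t ntfs-3g -o big_writes /dev/sdb1 /media/user/bigdrive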
How DO businesses back up their data now? Is it all just being shuffled back and forth across various clouds?
How do normal people do backups?
I haven’t done systematic backups in years and years. I mostly just hope. But music production files take up enough space (and occasionally are of enough merit) that it may be worth considering again.
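If it does become worth it again, the low-effort version is an rsync mirror of the project folders to an external drive, run by hand or from cron. Just a sketch with made-up paths:

# one-shot mirror; --delete makes the backup match the source exactly
rsync -a --delete /home/user/music-projects/ /media/user/backupdrive/music-projects/
# or as a nightly crontab entry (crontab -e), 2am every day:
0 2 * * * rsync -a --delete /home/user/music-projects/ /media/user/backupdrive/music-projects/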