I have a linux system that was installed on a computer that no longer works. The hard drive is OK, but that system no longer boots (not even using a Live CD).
I have installed that disk as a second hard drive in a second Linux system. Both systems are Fedora, if that matters. I can see the partitions on the drive I’m trying to mount, but I don’t know how to mount it. I’ve searched, but all I find are instructions for installing a second drive that will then be formatted. I don’t want to lose the data on this drive, so I don’t want to reformat it.
I’ve tried adding a line like the one below to /etc/fstab, then remounting with mount -a, but it complains that /mnt/data2 does not exist. I’m pretty sure ext4 is correct, but I also tried ext3.
/dev/sdb3 /mnt/data2 ext4 defaults 0 0
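From what I’ve read since, mount -a does not create missing mount points, so the directory presumably has to exist first; a minimal sketch using the entry above:
# the mount point is not created automatically
mkdir -p /mnt/data2
mount -a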
I’ve also tried mounting it manually, but I get errors either way, whether or not the directory /mnt/data2 exists:
mkdir /mnt/data2
mount -t ext4 /dev/sdb3 /mnt/data2
mount: /dev/sdb3 is already mounted or /mnt/data2 busy
rmdir /mnt/data2
mount -t ext4 /dev/sdb3 /mnt/data2
mount: mount point /mnt/data2 does not exist
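To help diagnose the “already mounted or busy” message, something like this should show whether anything is actually holding the partition (a sketch, not captured output):
# show filesystem type, label, and mount point for each partition,
# plus anything layered on top of it
lsblk -f /dev/sdb
# show any existing mount of the partition
findmnt /dev/sdb3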
For that drive, fdisk -l tells me, among other things:
Device Boot Start End Blocks Id System
/dev/sdb1 2048 4095 1024 83 Linux
/dev/sdb2 * 4096 1028095 512000 83 Linux
/dev/sdb3 1028096 488396799 243684352 8e Linux LVM
So /dev/sdb3 is the partition I’d like to access (there’s /dev/sda… stuff as well, for the drive I’m booting from.)
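One thing I notice in that output: the Id of 8e means /dev/sdb3 is an LVM physical volume rather than a plain ext4 partition, which would explain why mounting it with -t ext4 fails; the filesystem would live inside a logical volume on top of it. A sketch of how one might get at it, with VolGroup and lv_storage standing in for whatever names LVM actually reports:
# list physical volumes, volume groups, and logical volumes
pvs
vgs
lvs
# activate the volume group, then mount its logical volume
vgchange -ay VolGroup
mount /dev/VolGroup/lv_storage /mnt/data2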
It’s now on a different system than it was originally installed on, so I wouldn’t have expected it to boot correctly. Also, that doesn’t solve the problem of seeing both drives at once.
That said, I’m not sure which system is actually booting. I installed Vortexbox 2.3 on the new system with the old drive disconnected, then connected the old drive. After I made the OP, I found that all the files from the original install are showing up, but as far as I can tell I still only see one drive. Here’s what I get when I run df:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda3 30106576 1280240 27273952 5% /
devtmpfs 4071528 0 4071528 0% /dev
tmpfs 4076456 0 4076456 0% /dev/shm
tmpfs 4076456 528 4075928 1% /run
tmpfs 4076456 0 4076456 0% /sys/fs/cgroup
tmpfs 4076456 4 4076452 1% /tmp
/dev/mapper/VolGroup-lv_storage 219033444 189793280 18090852 92% /storage
/dev/sda2 487652 63243 394713 14% /boot
I should have two 250 GB drives showing, but there’s only about enough space for one. The rest is presumably on /dev/sdb, but it’s not mounted.
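Presumably something like this would pin down which physical drive /dev/mapper/VolGroup-lv_storage actually lives on (again a sketch, not output I’ve captured):
# show which partition backs each LVM physical volume
pvs -o pv_name,vg_name
# show the whole device tree with sizes and mount points
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT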
But if I enter the command uname -a, it returns
Linux vortexbox 3.12.5-301.fc20.i686+PAE #1 SMP Mon Dec 16 18:42:48 EST 2013 i686 i686 i386 GNU/Linux
From that link, Vortexbox 2.3 should be running Linux kernel 3.12.5, and the previous version, which is what’s on the old drive, should be running 3.6.5. That suggests the new version is booting, but it’s the old drive’s storage that’s showing up.
The only thing I can really offer is that I have transplanted a Linux hard drive from one broken box into a working one; it might take a few minutes to adjust, but it does boot.
Can you disconnect the new drive, boot up, and see if your suspicions are confirmed?
I run OpenSuse and use the command line very rarely, usually only for things no GUI can do, like copying a directory structure without its contents, so this may not be much help.
For any mounting or adding of drives I generally start in the GUI Partitioner, or whatever partition manager you have in your situation. From there, in System Settings > Hardware > Removable Devices (in KDE), check Enable Automatic Mounting and its daughter boxes, such as Mount all removable media at login, etc.
It’s been years since a drive failed to mount for me, except when a hardware problem meant a drive wanted a good fsck.
Partitioner will show whether drives are actually seen before mounting, which is a starting point. And you can set the fstab mounting options there by clicking Edit on the drive (such as /dev/sda1) rather than writing directly to fstab, which always makes me uneasy.
[ Speaking of uneasy, I have no idea why there’s still no proper help system when a drive crashes and one is faced with a blinking cursor at the command line. Just showing the proper syntax for fsck would do wonders for learners who can’t get on the internet because the fscking computer has crashed… ]
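For what it’s worth, the basic invocation is short enough to memorize once seen; run it on an unmounted filesystem (the device name here is just an example):
# read-only check, change nothing
fsck -n /dev/sda1
# automatically repair whatever is safe to fix
fsck -p /dev/sda1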
That would also suggest that the storage from sda is not being mounted then. And that’s consistent with the mount failure messages you originally posted, which complained about sdb3 being already mounted. Assuming that both disks are partitioned similarly, what happens if you try mounting /dev/sda3 instead of /dev/sdb3 when both disks are connected? If that works, you should have access to the storage areas on both drives, sdb3 on /storage, and sda3 wherever you manually mount it.
If you want to use both disks long-term, and want to change what’s being mounted on /storage, my best guess is that you need to take a look at the partition labeling. It may be that sda3 and sdb3 have the same labels, and somehow sdb3 is getting picked over sda3. I confess that that’s pretty speculative though.
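If it is a naming collision, it should be visible directly; a sketch of what I’d check, with names guessed from your df output:
# filesystem labels and UUIDs for every partition
blkid
# volume group names and UUIDs; two groups both called VolGroup would show up here
vgs -o vg_name,vg_uuid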
Yeah, that’s pretty much correct. I’ve gotten a little further along in understanding what my issue is, which isn’t what I originally thought it was. With both drives installed I’ve got two identically named logical volume groups, one on each hard drive. I’m able to get to the copy of /dev/mapper/VolGroup-lv_storage on /dev/sdb3 through /storage. The other copy of /dev/mapper/VolGroup-lv_storage, on /dev/sda6, has over 200 GB of free space that’s inaccessible.
ETA: I’ve found pages on renaming logical volumes, like this one, but I don’t really understand them, or how to apply them to my situation.
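As far as I can tell, the key step would be renaming one of the identically named groups by its UUID, since the name alone is ambiguous while both drives are attached; something like the sketch below, where the UUID and the new name VolGroupOld are placeholders:
# find the UUIDs of the two groups that share a name
vgs -o vg_name,vg_uuid
# rename one of them by UUID to an unused name
vgrename <vg-uuid-of-one-copy> VolGroupOld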
I was able to resolve this by following the first part of the instructions at How to rename the root volume group. I disconnected the new hard drive, so I would only see the volume group I wanted to change, and then booted using a Linux Mint Live CD.
I set the root password using
sudo su
passwd root
<enter dummy password twice>
(Though I think I could have just run the commands as root after the sudo su. I couldn’t run them directly, because lvm asked for the root password, and there originally isn’t one.)
I could then do the following from the link:
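(This is my summary of the gist rather than a verbatim quote; VolGroupOld stands in for the new name I chose.)
# with only the old drive attached there is a single group named VolGroup,
# so renaming it by name works
vgrename VolGroup VolGroupOld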
Then I shut down, reattached the new drive, booted, and modified /etc/fstab to mount the now non-shadowed partitions. I’m now seeing all the space.