What is UNIX?

Bill H.: If you need to manipulate a long path more than twice (my general rule), stash it into a shell variable and expand it with double-quotes.

mydir=/home/me/mydos/dosemu/freedos/
mv *.exe "$mydir/bin" && cp *.doc "$mydir/doc"
cd "$mydir"

Use of variables and looping constructs (for, until, while, etc.) can greatly reduce your finger strain.
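A quick sketch of the kind of loop that saves finger strain (the file name and scratch directory here are made up for the demo, not from anyone's actual setup):

```shell
# Make three numbered backup copies of a file with one short loop
cd "$(mktemp -d)"            # scratch directory so this demo is self-contained
touch notes.txt              # the (hypothetical) file to back up
for n in 1 2 3; do
    cp notes.txt "notes.txt.bak$n"
done
ls notes.txt.bak*
```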

And you do have a dir stack in bash and zsh. Look up pushd and popd. Here, for example.
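For the curious, here's a minimal pushd/popd session (run in scratch directories, so the names are invented):

```shell
# pushd changes directory AND saves the old one on a stack;
# popd pops the stack and returns you there; dirs -v lists the stack.
cd "$(mktemp -d)"
mkdir -p proj docs
pushd proj > /dev/null       # now in proj; previous dir saved on the stack
pushd ../docs > /dev/null    # now in docs; proj saved on the stack
dirs -v                      # entry 0 is always the current directory
popd > /dev/null             # pop: back to proj
pwd
```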

And as for partitions: Nothing makes your partition your drive except good sense. If you want everything on a single partition, go right ahead. I don’t know of a single thing you’ll break, except maybe your whole system if you aren’t in the habit of cleaning out /tmp often enough.

For example, my 120 GB drive is all one partition. It’s handled by one single ext3 filesystem and it’s doing just fine.

Besides, you only have to worry about partitions two times: When you install a new OS and have to make room, and if your computer ever needs major fixing to retrieve hard-to-get data. I think both are rare enough that it isn’t worth worrying about.

You could just mount your floppy as “/a”:

cd /usr/local/myapp/mydir1/mydir2
cp test /a/

Bill H. and Derleth: Windows works perfectly well with multiple partitions. It’s been a personal “must do” for me ever since Windows 95 and the monthly format-and-install. Starting with 98 SE or something Windows even had the ability to move the home folder “officially” in the GUI so that clicking “My Documents” or whatever would take you to the desired spot. In prior versions a registry change would do the same thing.

Since I’ve been playing with Linux a heck of a lot lately, I’ve been doing the same thing: pointing /Users to a dedicated partition. It’s not really a hassle or pain in the neck at all.

Even when constant reformats weren’t a pain in the neck, I’ve always been in the same habit. Starting with my 3rd Mac (a Quadra 636) I started partitioning the drive. Of course, being a Mac, it was always super easy. The organizational benefits were great, especially one-click desktop access to my “Documents” drive. (I’ve never been the type to litter my desktop or give creative names to my drives.)

The first thing I did starting with the Mac OS X public beta was figure out how to force the OS to recognize my “Users” drive as “/home” (actually it’s “/users” on Mac OS X). With the early releases fstab wasn’t used so you had to hack the automounter and get into the startup scripts and manually set your mountpoints. Jaguar (10.2) now just uses the simple fstab file.
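For illustration, an fstab entry of the sort described might look something like this (the device name, filesystem type, and options here are assumptions for the sketch, not taken from any actual machine):

```shell
# Hypothetical /etc/fstab line mounting a dedicated partition on /Users:
#
#   /dev/disk0s10   /Users   hfs   rw   1   2
#
# device          mountpoint  fs   options  dump  fsck-order
```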

All in all, partitioning is useful for organization, emergencies, reinstalls, upgrades, and defragmenting purposes.

Oh, the IT guys might not like it, but I even went to the trouble of partitioning my work machine without destroying the precious “standard load.” Hmmm… I could install Linux on the extra partition… (yeah, I know, I won’t).

Just so everyone’s on the same page, Windows 9x and NT are NOT the same OS, even though they look alike. 9x (and Me) are DOS-based shells, while NT (and 2000 and XP) are much closer to FreeBSD. NT has always been a multi-user OS.

Rex: No, NT is not especially close to FreeBSD. (Mac OS X is based around Darwin, which borrows heavily from FreeBSD. Maybe that’s what you mean.) NT is a POSIX OS, and therefore marginally closer to UNIX than DOS is, but it isn’t a UNIX system.

The only time Microsoft created a UNIX OS was when it created Xenix.

My favorite quote about the Unix philosophy (don’t recall the source, but it was one of the gods): “Unix doesn’t stop you from doing really stupid things, because that would also stop you from doing really smart things.” Which means that if you know how, you can get a lot more mileage out of Unix than out of, say, Microsoft products. People working in the various sciences have a larger proportion of people who know how, and we’re always glad to get that extra edge, so Unix is prevalent in scientific work.

Another factor is that user interfaces are generally more difficult in a Unix environment (yes, this is due primarily to market forces), but programming is easier, thanks to the modular design and programs that talk to other programs. If I write up a code to do some numerical integrals I need, I’m going to be the only person who uses that code. So I’m not going to worry about giving it a pretty user interface, or about “what if the user does this”. But I am going to worry about how easily I can get the coding right. Unix is better suited to this sort of work than Windows.
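That modular, programs-talking-to-programs style is easiest to see in a one-liner that chains small tools together with pipes (a toy example, not from anyone's post):

```shell
# Rank the most frequent words in some input by chaining four small tools:
# tr splits into one word per line, sort groups them, uniq -c counts
# each group, sort -rn ranks by count, head keeps the top three.
echo "to be or not to be" |
    tr ' ' '\n' | sort | uniq -c | sort -rn | head -3
```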

OK, if pithy quotes are fair game, here’s one of my faves: “Unix is user-friendly - it’s just particular about who it’s friends are.”

Aaargh! Possessive “its” with an apostrophe! Kill me now.

If you’re interested in seeing where Unix originally came from, and how the different “flavors” evolved, this family tree is really interesting.

Derleth, I do know all about popd and pushd (and dirs and “cd -”), but it ain’t the same. If you’ve ever used 4DOS (which for me is saying “if you’ve ever used Windows”), you’ll know what I mean. It saves the last 20 directories you’ve been in, and you can select one by typing “cd ” and hitting the Page Up key. You get a list, which you can pick from with the arrow keys. (You can use this capability anywhere, not just with “cd”, so mid-line you can choose from several recent directories very quickly.) pushd and popd do provide a stack, but it’s just ugly and, for me, not really usable. However, one could write a relatively simple shell script to build what I’m after, and someday I hope to.

I like your idea about putting dirs in variables real-time. I think I’ll play a bit with that. Also, I just discovered there’s a variable set by bash to your last directory (OLDPWD), which enables:
cd /usr/local/bin
cp $OLDPWD/goofy .
etc.

I use a ton of looping stuff real-time, so I’m with you there. I wouldn’t say I create a lot of variables on the fly, though. Really only in scripting.

I too have a 120G hd as a single partition. But that wasn’t the one I was referring to. To the best of my knowledge, Unix requires at least two partitions (I believe a separate /boot is mandatory), and recommends several beyond that. Yeah, it’s all for a good reason. And yeah, it creates an overflow protection that Windows doesn’t have. But in reality, I’ve never actually derived any value from it, i.e. never had a partition overflow. So for my money it’s needless aggravation.

Mr2001, that is true, but what I’m interested in really isn’t an easy way to access the floppy, but rather a way to deal with several long directory paths simultaneously. DOS has a way to create multiple roots (effectively), where Unix is limited to exactly one root.

In my autoexec, I had:
subst m: "c:\documents and settings\administrator\my documents"
(along with others)

Thereafter, I could say
copy goofy m:personal\goofy
(or other)
Anyway, my point in all this was that multiple roots, as required in DOS and denied in Unix, aren’t all bad, and in some ways they’re better.

Balthisar, I think you may have misread; I’m in total agreement, and in fact I do that as well on Windows, to keep a data partition separate for those times you want to rebuild the machine.

This is such a total hijack, but I didn’t really feel like posting it elsewhere, especially as MPSIMS is the only relevant place and the eyes to this previous conversation here would likely never see it.

Anyway, I scripted the cd replacement we talked about above (a decent pushd replacement), and here it is. It keeps a stack of where you’ve been, allows you to do “cd 3” to change to the 3rd in your stack. “cd 0” lists the current stack. “cd -3” removes the third entry. “cd” and “cd -” do what you’d expect.

It also sets up variables, $D1, D2, etc. where $D1 is the first entry, etc. So you can say “cp $D3/filename $D2”.



function cd()
{
        if [ $# -eq 0 ]; then
                my_cd "$HOME"           # quoted in case $HOME contains spaces
                return 0
        fi

        case $1 in 
                0)
                        # List directories on stack
                        dirs -v | grep -v "^ 0"         # 0 is current directory
                        ;;

                [1-9] | [1-9][0-9])
                        # Change to one of the stacked directories
                        DIR="${DIRSTACK[$1]}"
                        my_cd "${DIR/~/$HOME}"          # pushd doesn't handle ~
                        ;;

                -[1-9] | -[1-9][0-9])
                        # Remove entry
                        NAME=D$(( ${#DIRSTACK[@]} - 1 ))
                        unset $NAME
                        popd +${1/-/} >/dev/null
                        my_cd_set_vars
                        ;;
                -)
                        # Change to the last directory
                        DIR="${DIRSTACK[1]}"
                        my_cd "${DIR/~/$HOME}"
                        ;;

                *)
                        # Change to a user-specified directory
                        my_cd "$1"
                        ;;
        esac
        return 0
}

function my_cd()
{
        pushd "$1" > /dev/null

        # remove duplicates
        for (( LOOP=1; $LOOP<${#DIRSTACK[@]}; LOOP=$LOOP+1 )); do
                if [ "${DIRSTACK[0]}" == "${DIRSTACK[$LOOP]}" ]; then
                        popd +$LOOP >/dev/null
                fi
        done

        my_cd_set_vars
}

function my_cd_set_vars()
{

        # Setup $D1,$D2 variables as stacked directories; print out stack
        for (( LOOP=1; $LOOP<${#DIRSTACK[@]}; LOOP=$LOOP+1 )); do
                NAME=D$LOOP
                export $NAME="${DIRSTACK[$LOOP]}"
                echo -n "$LOOP ${DIRSTACK[$LOOP]}  "
        done
        echo
}


Mac OS X does not encourage using the root account at all. Root is hidden by the system, and Apple has made it rather hard to find and access that account.

Apple’s tech support specifically discourages the use of root for anything.

Bill H just demonstrated another benefit for Unix/Linux users - the code is free, and anyone who wants it is free to have it. Anyone who sees a better way to solve a problem can write what they want, include it into their OS, and bingo! your computer does what you want it to do!

Of course, the entire OS is much, much more than the script above, but it demonstrates the flexibility and customizability (is that a word?) of Linux and Unix. For many people, that’s a HUGE reason to use *nix rather than MS.

Me? I use it cuz my SO is smart enough to install it, and from day-to-day, there’s really no difference in feel to me.

If you just want to save keystrokes, you could just set up variables in your .bashrc file:
m=/home/billh/documents
…etc…

then just use “cp goofy $m/personal”.

Alternately, you could use symbolic links in your home directory or the root directory to similar effect. After creating them, you would type, respectively, “cp goofy ~/m/personal” or “cp goofy /m/personal”.
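A sketch of the symlink version (everything here is a made-up scratch setup, standing in for the real long path):

```shell
# Create a short alias "m" for a long directory path via a symbolic link
cd "$(mktemp -d)"
mkdir -p documents/personal       # stand-in for the long real path
ln -s "$PWD/documents" m          # would be ~/m or /m in the real setup
touch goofy
cp goofy m/personal/              # same effect as DOS: copy goofy m:personal
ls documents/personal
```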

As far as I can see, the only thing you’d be missing would be the per-drive current directory state that is provided by DOS.

Mr. Feely, all excellent ideas. But… bear in mind what you’ve just demonstrated is a means of creating effective multiple roots (or really ways of violating the “single directory structure” of unix). My point was debating friend Derleth’s contention that a single immovable directory structure was an advantage.

Maybe I’m missing a subtle point here, but doesn’t the fact that you can emulate multiple roots in Unix while still having a true single-rooted filesystem give you all the benefits of a multiple-rooted filesystem? (With the exception of per-root current directory state, of course, but that could always be faked.)

As far as the recommended practice of using multiple partitions on Unix, there’s nothing inherent in the design of Unix that says you can’t just use a single partition. (Even /boot is just a kludge to get around limitations in BIOS disk addressing.) IMHO, the capability of using multiple partitions just gives added flexibility.

Headcoat (if you’re still reading after all the incomprehensible geek debates), I think there are a couple of things missing from your answer.

There are some fundamental differences: Unix was designed from the ground up a little more than most flavors of MS Windows, which tend to be agglomerations of new stuff bolted onto old versions of Windows. This helps a lot with stability. While recent versions of Windows are becoming much more stable, they still don’t approach Unix’s ability to keep running continuously for months on end without a reboot.

As mentioned, multi-user and security are somewhat better integrated into unix (even if not part of the earliest concept of unix).

Another difference is that, to some degree, Unix was designed to give more control to the user than Windows. This is considered a good thing by geeks and people who maintain servers and high-performance systems, because they can fix and tweak things more easily. It can be considered a bad thing by people who maintain systems for non-geeks, because it means the users can screw things up royally much more easily than with Windows.

Also, unix has been around long enough to be well-tested. This is even more true for Linux, which hasn’t been around as long, but has thousands and thousands of people looking at the code and constantly fixing whatever bugs they can find.

And finally, this is not a fundamental difference, but Linux is much more secure simply because most viruses and worms are written to attack Windows. Not necessarily because Windows is more vulnerable, but because it is so much more popular.