I think you need to think of it more as sucking out all but the one line and pushing the rest into a new file… you'd use grep to locate the culprit line, then invert the match so everything else gets written out…
You could also do this with sed or awk, if available on your system, but I don’t remember the exact commands. If you’re going to be doing a lot with UNIX, those will be very useful tools to know.
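For instance, something along these lines — assuming the bad line contains some unique marker text (here `CORRUPT`, a placeholder, as are the file names):

```shell
# Invert the match: write every line EXCEPT those matching the
# pattern to a new file. 'CORRUPT' and the filenames are placeholders.
grep -v 'CORRUPT' bigfile.txt > bigfile.clean.txt
```

Note that you write to a *new* file; redirecting back onto the input file would truncate it before grep reads it.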
A sed one-liner would be one way; sed is often my tool of choice for ad-hoc file manipulation. I thought I'd let this pass since the OP solved his problem, but since you brought it up …
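A sketch of the sed and awk equivalents, again assuming a placeholder pattern `CORRUPT` and placeholder filenames:

```shell
# sed: the 'd' command deletes every line matching the pattern
sed '/CORRUPT/d' bigfile.txt > bigfile.clean.txt

# awk: print only lines that do NOT match the pattern
awk '!/CORRUPT/' bigfile.txt > bigfile.clean2.txt
```

All three approaches (grep -v, sed, awk) do the same thing here; which you reach for is mostly a matter of habit.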
Yes, but that (grep) would only work if that line alone matches some specific pattern. In a file that size, it seems quite possible that you’d have duplicate entries, so at the least you’d want to do a straight grep to stdout first, to check if the line is unique.
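That sanity check is cheap to do first. Something like (placeholder pattern and filename again):

```shell
# Count how many lines match BEFORE deleting anything;
# a count of 1 means the pattern isolates exactly one line.
grep -c 'CORRUPT' bigfile.txt

# Or inspect the matches themselves, with line numbers:
grep -n 'CORRUPT' bigfile.txt
```

If the count comes back greater than 1, tighten the pattern (or anchor it with ^ and $) until it matches only the line you mean to drop.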
That’s true, Chronos, though I am assuming that this is a data file of some sort with a fairly predictable format, or that a specific enough regex could be composed to isolate a single line.