I like this. It sounds like the right way for a company to keep information they say is important.
An alternative approach is to assign the responsibility to an individual employee with no IT training, whose future is already secure and who has one foot out the door, and who tackles it by asking for advice on internet forums.
I’ve been running robucopy – wait, sorry, robocopy – for the last 15 hours. I have 10 top-level folders and decided to do those one at a time rather than doing the whole job at once. It’s chugging away. Meanwhile, a forced-update advisory has appeared on screen. So, I’m curious whether the running job or the forced update will wind up winning.
WHICH BRINGS TO MIND A QUESTION: if the job gets interrupted, say by the laptop being forced to do an update, how do I finish it? Will it restart itself, or do I resubmit exactly the same command, or something else?
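For reference, what I’ve been running is along these lines; the real server and folder names are different:

```
:: One top-level folder at a time. /E includes subfolders (even
:: empty ones), and /LOG+ appends progress to a log I can check.
robocopy \\source\MyFiles\Folder1 \\destination\Archive\Folder1 /E /LOG+:C:\temp\folder1.log
```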
Since the OP has been addressed, I’ll opine that it’s appalling for an IT department to wash their hands of a request like this. Copying that amount of data using command-line tools isn’t, with the greatest of respect to @Napier, something that should be left to a user.
This is a sentence I glossed over when I originally read the first post.
So a question to the OP: what do you think would happen if, on the morning of your final day, you tell your supervisor that this copying task didn’t get done? Or that you tried every trick you knew, but couldn’t figure out how to complete it?
Could there be any ramifications? Anything that would affect your retirement?
Jeez, they might consider it a fireable offense! Oh, wait…
The only thing that would impact me is that people I care about there might need something of mine and be unable to get it. However, there’s a significant chance nobody will ever look at the stuff.
I do want to leave (and have not set a date yet), but if I’m messing around at home, looking in on this process, and getting paid for it, that’s not too bad. It’s the dog-and-pony shows in front of hundreds of people that I really dislike (I get mild stage fright for days!).
I have 6 top-level folders. One tiny one I did yesterday as a test of robucopy, I mean robocop, I mean robocopy. I started another, bigger one about 24 hours ago, and it’s currently working on the 64th second-level folder out of 145. I guess it will finish tomorrow, roughly. I only work MTW, so I’m not losing anything if I leave it running over the weekend. Right now I’m guessing they will get what they said they wanted.
However, I think they’re a bit lucky in this. What they are doing isn’t due diligence.
Yeah, apparently I need to put a backslash before < or > characters or the message board doesn’t display what is between them. I meant to say <SHIFT>+<RIGHT CLICK>.
OP still has a problem here: what do I do if robocopy stops? How do I restart the process without trying to start over from the beginning?
It’s running very slowly now. I figure it’s doing about 0.3% of a file per second, so five or six minutes per file. I don’t know why the speed would change. Based on my employer’s workday and time zone, I’d expect general network activity to be fairly low right now.
Wait, am I missing something obvious here? Solving the problem described in your OP is the reason OneDrive exists. In fact it’s the only reason it exists. Use OneDrive as intended, instructions here.
The way it’s supposed to work is: you install the sync client on your PC, sign in, and choose the folders you want synced, and then they magically stay in sync between your PC’s OneDrive folder and the cloud as bandwidth is available. You don’t have to babysit or intervene at all. If something gets corrupted, OneDrive handles it behind the scenes. Likewise, when you add or edit files, they magically sync in the background as bandwidth is available.
Nobody’s ever supposed to access OneDrive files via a network share! That’s bypassing the whole reason it exists! Also, accessing it via URL is a stopgap, not intended to be the primary access method.
You do need the sync client installed, but if you’re on Windows 10 then it’s already installed. In fact it may already be configured, and you never knew because your IT department sounds like they’re too stupid to tell you basic and obvious things. In spite of that, it sounds like OneDrive is already part of the ecosystem, therefore it’s probably part of their limited vocabulary of grunts and clicks, so it’s probably one of the few things they can help you with. This is definitely worth your time; look into it.
Use ipconfig /all to check the lease parameters. Some places actually run 365-day IP leases, but usually it’s 8 days, and sometimes (a shortage of IPs?) it’s 2 days. A properly set up DHCP server (!!) should ping an address before handing it out, to avoid a conflict. But then, we’re talking about a laptop, so a fixed address means turning that fixed address on when connected to the corporate network and off at home. Plus, while you are gone, someone will take that address; and fixed devices that are powered off (that printer) will be a problem. If the whole DHCP setup (sometimes 2+ servers) is rebooted, the database could disappear and every non-fixed address gets reallocated. You could make it work despite IT, but likely only one session at a time, and not necessarily permanently. Then we get to IPv6, a whole 'nother kettle of monkeys.
As always the devil is in the details.
(Handy hint: to reset DHCP in the event of an IP conflict, unplug the network cable, count to 10, and plug it back in. DHCP will redo the handshake. But if the DHCP server doesn’t check for conflicts first, you may still get an address someone else has improperly hard-coded.)
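The same handshake can also be forced from a command prompt (elevated, on recent Windows) without touching the cable; these are standard ipconfig options:

```
:: Show the current lease details ("Lease Obtained" / "Lease Expires")
ipconfig /all

:: Give the address back, then request a fresh lease; the
:: command-line equivalent of the unplug, count to 10, replug trick
ipconfig /release
ipconfig /renew
```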
I can see the problem…
OK, we (IT) back stuff up. But then it gets corrupted or erased, so now our latest backups don’t include those files. What should we be saving, once corporate data starts running into the terabyte range?
Back in the stone age, we recommended a process (with tape): daily backups kept for a week or two, weekly backups kept for a month, monthly backups kept for 12 months, and year-end backups (including accounting data closed off for the year) kept forever. That meant a lot of tapes, but tapes were cheap(ish) compared to disks. Daily backups might be a differential from the weekly. The obvious flaw is that something created and deleted within the week is not on the weekly/monthly/annual. Should a backup that simply keeps running images of the server keep files deleted off the server? (Or renamed, so we have two?)
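A minimal sketch of that rotation done on disk with robocopy instead of tape; the paths, schedule, and retention here are made-up assumptions, not a tested script:

```
:: Weekly full: copy everything, then clear the archive bits so the
:: daily runs below only pick up what changed since this full.
robocopy D:\CorpData E:\Backups\weekly /E /R:2 /W:5
attrib -a D:\CorpData\* /s

:: Daily differential: /A copies only files whose archive bit is set
:: (created or modified since the weekly) without clearing it, so
:: each daily contains everything changed since the last full.
robocopy D:\CorpData E:\Backups\daily-mon /E /A /R:2 /W:5
```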
Of course, these are the details IT should solve/decide, and communicate to those who need to recover files from time to time. Today, what’s the backup process, and what are you keeping?
Plus, one of the issues is that the whole point of servers is that local storage on the individual PCs is not backed up, just the server. The laptop’s presence at backup time is not guaranteed, etc., etc. Plus, do we need all of Bob’s photos of his daughter’s school recital and his vacation, even if they make nice screen savers?
Personally, I bought (several) terabyte-sized USB disks, and every so often I copy my Documents, Photos, Pictures, email offline folders, and assorted other data (iTunes\Music and ebooks, for example). I usually use just File Explorer, which tells me when a file is uncopyable, so I haven’t really found any bad files. Unless something is new, I have that backup copy. It depends: for photos, I just add to the existing store and do a full copy once in a while; for documents, it’s simpler to copy the entire directory. It’s something I can start going, then go have a coffee, read an (e)book, etc.
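If Explorer ever gets tedious, the same top-up can be scripted; a minimal robocopy sketch with placeholder paths:

```
:: Copy anything new or changed in Documents to the USB disk.
:: Files already present with matching size and timestamps are
:: skipped by default; /XO also skips files that are newer on the
:: backup, and /LOG+ appends a record of what happened.
robocopy C:\Users\Me\Documents E:\Backup\Documents /E /XO /R:1 /W:1 /LOG+:E:\Backup\backup.log
```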
BTW - this is what I found, to answer the “restartable” question:
By default, robocopy skips existing files: if a file’s size and timestamps match at the destination, it is skipped from the copy operation. XCOPY also has a restartable option, but I think that applies to resuming individual big files.
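So, in practice, restarting should just mean issuing the identical command again; a sketch with placeholder paths:

```
:: First run, interrupted partway through:
robocopy A:\Folder2 \\newserver\archive\Folder2 /E /Z /LOG+:C:\temp\folder2.log

:: After the interruption, run the exact same command again.
:: Files whose size and timestamps already match at the destination
:: are skipped, so the job resumes roughly where it stopped; /Z also
:: lets a large half-copied file resume mid-file.
robocopy A:\Folder2 \\newserver\archive\Folder2 /E /Z /LOG+:C:\temp\folder2.log
```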
You might also consider using the command ATTRIB +A filespec to set the archive bit on the original files, then robocopying only files with the archive bit set… Any that fail will still be set? (Why don’t I trust Microsoft??) Testing required. Also, you can list those afterwards by using the list-only option and see what you find.
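Sketched out (untested, per the caveat above; paths are placeholders):

```
:: Flag every file under the source for copying.
attrib +a A:\* /s

:: /M copies only files with the archive bit set and clears the bit
:: on success, so a rerun after a failure only revisits what did
:: not make it across.
robocopy A:\ \\newserver\archive /E /M /LOG+:C:\temp\archive-run.log

:: Afterwards, list (without copying) anything still flagged:
robocopy A:\ \\newserver\archive /E /A /L
```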
Any DOS command followed by /? shows its options, and Google will give you more detail on whether an option could be useful.
I think that’s what I am doing already, more or less.
I originally had ALL my work files in the folder C:\a. When Windows introduced the per-user folder that appears transparent to the user, with Documents, Pictures, Downloads, et cetera, folders inside it, moving my files there caused incompatibility problems, and I never used that system. I just stuck with C:\a.
When OneDrive came along, with IT help, I identified all of C:\a’s content, and nothing else, as what is to be maintained. Now there appears in File Explorer an entity called “OneDrive - Napier’s Employer, Inc.” which contains a, Documents, Pictures, Downloads, et cetera, and the only part of this I have ever used is the “a” part. If I copy the path from Explorer it resolves to "\users\Napier\OneDrive - Napier’s Employer, Inc."
However, because this path is so long and violates so many file-naming rules of various software I use, I can’t really use this form of path all the time. And it’s not like I have loads of files on my PC and designate some of them for OneDrive to sync; it’s more the other way around. OneDrive is where ALL my files are, and I have only specified a few of them for which OneDrive is to keep a local copy. For some time now I have not had enough room on the local HD to keep a local copy of everything.
So, what I did was to create an alias, A:, which File Explorer shows as a network drive. A: shows as containing all my files. The "A:" path and the "\users\Napier\OneDrive - Napier’s Employer, Inc." path both lead to the same files, or, at least, doing anything to a file using one of these takes effect on the file pointed to by the other. I don’t know the mechanics behind the scenes and don’t want to. They told me OneDrive had to have control of all my user files, and it does.
I do remember there were different ways to create aliases, and that they would have given different results. At one point I asked for IT help; they worked on it for a while and delivered a result, and it almost immediately broke. So, based on my own reading, I worked out another result, and as far as I can tell it has worked fine for a few years now.
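For what it’s worth, one way such an alias can be created is with the built-in subst command; this is a guess at the mechanism, since the exact method isn’t stated:

```
:: Point A: at the OneDrive folder. A subst drive shows up as a
:: local drive; mapping a shared folder with "net use" instead is
:: the variant Explorer labels a network drive. Note that subst
:: mappings do not survive a reboot unless recreated, e.g. from a
:: startup script. (Path is illustrative.)
subst A: "C:\Users\Napier\OneDrive - Napier's Employer, Inc."

:: To remove the alias later:
subst A: /d
```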
I DO THINK that IT has control of all my files already, and the project I’m working on right now is to copy all of this stuff from one filesystem (which IT exclusively controls) to another filesystem (which IT also controls). For all I know, they’re different pointers inside the same huge filesystem – but that’s more than I can see. This project does feel kind of stupid, but the company has told me what the company needs from me, and I’m trying.
OK, it sounds like you are using OneDrive as intended, but your manager has asked you to copy files out of that drive to yet another networked drive, using your home network connection. I overlooked this earlier because it sounds very silly.
If it were me, I’d approach IT and simply ask them to copy the files. They presumably have access to both locations, they probably have higher-bandwidth network connectivity than you, they probably have specialized tooling for this, and it’s a very reasonable ask.
If IT won’t help then I’d talk to management and let them know that the data set is too large to be copied practically, and everything’s backed up in OneDrive, so would they be willing to accept this as the task completion? If not, can they pull some strings with IT to get them to comply?
It’s all well and good to find alternate solutions to do this yourself, but it’s really not reasonable to expect a remote-working user to sync 400GB of data between two remote data stores.
I agree that this has reached the point where you’ve gone above and beyond your duty to do IT’s job for them. If they want the data copied, they can copy it themselves. If you need justification, explain the steps you’ve taken, why it isn’t working well, and tell them that if they want you to keep working on this, to tell you EXACTLY what steps they want you to take to accomplish it.
Right! Well, my manager and IT, who created the folder into which I’m supposed to do the copying.
About using my home network connection: am I moving data from one network store, over my connection, to my remote laptop, and then again from my laptop to another network store? I had hoped that, what with these being computers and all, it would take the command from my remote laptop and execute it over whatever pipeline IT intended.
It’s hard to say without knowing the setup, but I’d say it’s most likely the two backends don’t know about each other, and your PC is mediating this entire interaction.
OP asking again. I’m getting “The network path was not found” errors. If I try to keep doing this myself, I’ll have to start the job again. And there’s a forced update waiting, too. So I should reboot.
Do I just submit the same robocopy command, and will it start where it left off?
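Assuming the earlier advice holds, the plan is to rerun the identical command once the update is done; something like this, with retry limits added so a vanished path fails fast (placeholder paths again):

```
:: Rerunning the identical command resumes the job: files already
:: copied (matching size and timestamps) are skipped. /R and /W
:: override robocopy's defaults of 1,000,000 retries at 30-second
:: intervals, so a missing network path errors out quickly instead
:: of hanging for days.
robocopy A:\Folder3 \\newserver\archive\Folder3 /E /Z /R:2 /W:5 /LOG+:C:\temp\folder3.log
```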