Is it possible to check if a farm would be created on a root file system, or would that be too transparent to the farmer process?
We’ve seen a few cases where, if the paths aren’t precisely correct in the farmer startup command, or if a drive isn’t mounted properly, the farm winds up being created on the root filesystem. The root filesystem will usually have much less available space than the intended drive, with all the attendant problems of an out-of-space root.
If it is possible to validate that a farm is mounted on a device other than the root fs, then perhaps an --allow-root flag that defaults to false. That way, when a user is doing something like migrating farms from one OS to another, it would prevent that particular disaster until they’ve gotten their drives mounted properly on the new OS and the farmer command pointing to them correctly. If they really do want a farm on their root filesystem, which should be pretty rare, then they’d have to enable it with --allow-root true.
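For what it’s worth, the check itself seems doable on Linux. A minimal sketch (in Python, purely illustrative; the helper name is hypothetical, not anything the farmer actually ships): a path on a separately mounted drive reports a different device ID than `/`, so matching device IDs mean the farm would land on the root filesystem.

```python
import os

def is_on_root_fs(path: str) -> bool:
    # A path on a separately mounted drive reports a different device ID
    # (st_dev) than "/"; matching IDs mean a farm created at `path` would
    # land on the root filesystem.
    return os.stat(path).st_dev == os.stat("/").st_dev
```

A farmer could run this on each configured farm path at startup and refuse to proceed (absent an explicit --allow-root) when it returns True.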
If it’s not possible to check for that, then maybe some sort of --validate-farms true/false flag that errors out gracefully if it doesn’t find an existing plot everywhere the farmer command says it should. (e.g. “No farm detected at path X, disable farm validation to create.”)
(I suppose it’s not as big a deal on Windows, since each mount gets a different drive letter, and if a drive isn’t mounted properly then it won’t write the farm anywhere; on Linux, though, it can be a pretty big deal.)
Thinking on this further, it occurs to me that as things currently stand, a Linux user can mitigate the possibility of a simple mount failure causing this by having the folders that serve as mount points owned by root while unmounted, and by the user while mounted. That way, if the drive fails to mount, farm creation should fail due to permission issues. That said, there will still be people who run as root, and others who like to mount their drives where users often have permissions anyway (i.e., under the home directory, or /media/user), so I still think the suggestion is a good one that gives users the flexibility to safely mount however, and with whatever permissions, they like.
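A failed mount is also directly detectable, which could back either flag. A sketch, assuming a Linux-style mount layout (the function name is made up for illustration): `os.path.ismount()` returns True only when something is actually mounted at the directory, so a bare, unmounted mount point reads as False.

```python
import os

def drive_is_mounted(mount_point: str) -> bool:
    # os.path.ismount() is True only when `mount_point` sits on a different
    # device than its parent directory, i.e. something is actually mounted
    # there. A failed mount leaves the bare directory, and this returns False.
    return os.path.ismount(mount_point)
```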
This is an interesting problem. I think the solution might be not to detect filesystems or anything like that, but rather to have a flag like --create that defaults to true, but can be set to false to only work with already existing farms and to refuse to start when any are missing. This would also allow for better error handling.
A more advanced option would be to support specifying farm IDs to handle reordering as well, but that is probably too much.
A --create flag would work (kinda just the flip side of the --validate-farms flag I suggested). Just a way to “confirm all farms exist, don’t run anything otherwise” so when doing things like migrating to a new OS, where people may still be learning how to mount drives properly, you have a way to confirm that you’ve done it correctly in the new OS and all expected farms are indeed found. A “safe mode” if you will.
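The proposed behavior is simple either way it’s spelled. A hedged sketch of what --create false / --validate-farms true could do at startup (the marker file name `identity.bin` is purely illustrative; it is not necessarily what the real farmer writes to a farm directory):

```python
import os

def validate_farms(farm_paths, marker_file="identity.bin"):
    # Sketch of the proposed "safe mode": refuse to start unless every
    # configured path already holds a farm. "identity.bin" is an
    # illustrative marker name, not necessarily the real farmer's file.
    for path in farm_paths:
        if not os.path.isfile(os.path.join(path, marker_file)):
            raise SystemExit(
                f"No farm detected at path {path}, disable farm validation to create."
            )
```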
I gave it a try. Eh. I’d say info would be useful when you know ahead of time that there’s a risk that drives or paths weren’t mounted properly, like when switching OSes or switching hardware/disks around. Then, yes, you could set up a long info statement listing every farm (removing the “path=” and size parameters from each entry) to test all the paths.
But it would still leave open the possibility that a drive that unexpectedly or accidentally became unmounted could cause the issue and lead to a full root filesystem.
With something like a --create false or --validate-farms true flag left active at all times in the farmer launch script, and disabled only when farms are intended to be created, you’d catch those cases. It would also have the benefit of testing the paths exactly as specified in the regularly updated and maintained farmer script, rather than in a separate info script with a different format that requires independent updating and maintenance to be useful.
Thought I would add that we’ve come up with a viable solution to at least prevent the OS drive from getting filled when a mount unexpectedly fails: set the permissions on the mount point to read-only (555) while the drive is unmounted. Then, if the farmer attempts to create a farm on the unmounted mount point, it will fail with a permission-denied error.
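The effect is easy to demonstrate. A small sketch (function name invented for illustration): with the unmounted directory chmod’ed to 555, any attempt to create a file inside it raises PermissionError instead of silently filling the OS drive. Note that running as root bypasses permission bits entirely, which is exactly the caveat raised earlier about root-run farmers.

```python
import os

def write_blocked_when_unmounted(mount_point: str) -> bool:
    # With the (unmounted) mount point chmod'ed to 555, creating a farm
    # file fails with PermissionError rather than filling the OS drive.
    # Caveat: a process running as root ignores permission bits entirely.
    os.chmod(mount_point, 0o555)  # r-xr-xr-x: no write, even for the owner
    try:
        with open(os.path.join(mount_point, "plot.bin"), "wb"):
            return False  # write unexpectedly succeeded
    except PermissionError:
        return True
    finally:
        os.chmod(mount_point, 0o755)  # restore so the dir can be cleaned up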
The drive will only be filled if there is enough space on it; otherwise farm allocation should fail. But if someone is willing to contribute time, a --create bool option would be good to add.
Come to think of it, is there a reason that available space isn’t checked before trying to create plots? Such a check would’ve prevented this issue in the first place.
It will fail to allocate the space if it does try, but I agree it would have been a nicer experience to check first; it just makes the implementation more difficult, and no one has invested time into it yet.
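The basic check itself is cheap. A minimal sketch, assuming a fixed plot size known up front (parameter names are illustrative, not the real farmer’s API): query the free space on the target filesystem and refuse to allocate unless the plot plus some headroom fits.

```python
import shutil

def enough_space_for_plot(path: str, plot_size: int, headroom: int = 1 << 30) -> bool:
    # Up-front check discussed above: only allocate a plot if the
    # filesystem holding `path` has plot_size bytes plus some headroom
    # (default 1 GiB) free. Names are illustrative, not the farmer's API.
    return shutil.disk_usage(path).free >= plot_size + headroom
```

The harder part, as noted, is wiring such a check into the allocation path for every farm before any space is committed, rather than the check itself.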
Give it a shot. It’s consistent on Linux, not sure on Windows. It’s why I raised the issue in the first place, as instantly filling the OS drive to capacity is rather dangerous behavior. Sorry, thought you knew.
Note: It will error out. It’s not that it tries to work with the farm it creates; it’s just that after the error, you go look at the remaining space on the drive, and it’s 0.
Also note that it’s not just an issue when a mount has failed. Try to overallocate any drive, even one mounted properly: it will error out, but a farm will be created and remain on the disk, using up all available space.
The solution I offered, setting the mount point to read-only while unmounted (permissions while unmounted are totally independent of the drive’s permissions, which are applied to the folder upon mounting), will prevent a failed mount from doing this to the OS drive.