Multicast and FTP are not things that really belong in the same breath. FTP is a point-to-point, client-server protocol that runs over TCP: each session uses a control connection for commands plus a separate data connection for each transfer. None of this maps onto multicast.
Multicast allows you to send packets out once and have them received by a whole set of recipients. But there is no component that has the recipients sending anything back to you (apart from layered reliable multicast protocols, where acknowledgement packets may be used.)
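To make that one-way fan-out concrete, here is a minimal Python sketch using the standard `socket` module. The group address and port are arbitrary choices for illustration; any host that has joined the group sees the datagram, and nothing flows back to the sender.

```python
import socket
import struct

GROUP = "239.1.1.1"   # hypothetical administratively-scoped multicast group
PORT = 5007           # hypothetical port

def send_announcement(message: bytes) -> None:
    """Send one UDP datagram to the multicast group; every joined host receives it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)   # keep it on the local net
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)  # deliver locally too
    sock.sendto(message, (GROUP, PORT))
    sock.close()

def join_and_receive() -> bytes:
    """Join the group and block until one datagram arrives."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Standard IGMP join: group address plus "any interface".
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(5.0)
    data, _addr = sock.recvfrom(1024)
    sock.close()
    return data
```

Note there is no reply path in this sketch at all; that is exactly the gap that makes plain FTP over multicast a non-starter.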
The OP’s scenario can really be effected in two ways. Either the central machine initiates an FTP connection to each device and pulls the needed files, or each device initiates a connection and puts its files onto the central machine. In the former case the central machine initiates the connections one by one, but the transfers can run in parallel - there is nothing stopping you having multiple FTP sessions running at once. In the latter case the devices could run on a timer and independently connect to the central machine, or the central machine could send out a request telling each device to start a send-back session. There is no pre-canned multicast service that would do this for you, but it is not hard to work out how to create one: you just need a listener on a known multicast group and port to see the request and then initiate the transfer process. The transfer proper would run over TCP.
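The pull variant is easy to sketch with Python’s `ftplib` and a thread pool: one FTP session per device, all running concurrently. The host names, credentials, and file name below are hypothetical stand-ins, and a real version would want retries and error handling per device.

```python
from concurrent.futures import ThreadPoolExecutor
from ftplib import FTP

def pull_all(hosts, fetch, max_workers=8):
    """Run one fetch per device concurrently; returns {host: file_bytes}."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(zip(hosts, pool.map(fetch, hosts)))

def ftp_fetch(host, filename="data.log", user="anonymous", password=""):
    """Pull a single file from one device over plain FTP."""
    buf = bytearray()
    with FTP(host) as ftp:            # control connection to this device
        ftp.login(user, password)
        ftp.retrbinary(f"RETR {filename}", buf.extend)  # data connection for the transfer
    return bytes(buf)

# e.g.  results = pull_all(["device1.local", "device2.local"], ftp_fetch)
```

The `fetch` callable is injected so the same fan-out logic works whether the devices speak FTP, SFTP, or plain HTTP.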
As I noted earlier, there are however useful utilities designed for managing compute clusters that provide parallel execution of commands across a set of machines. They don’t multicast, but you get the same basic effect. (They also provide reliable execution, something you won’t get with a simple multicast.) They include parallel distributed file transfer. For instance: https://www.linux.com/news/parallel-ssh-execution-and-single-shell-control-them-all