
PyFilesystem (fs on pip) is a great library that supports in-memory filesystem creation with Python. However, I am looking to create and maintain a filesystem in Python in one process and dynamically access that filesystem in Python in another process.

Here are the bare-bones docs for the MemoryFS class, but it doesn't appear to be usable like that. It can open from a "path", but that path does not mean the same thing in two different processes. The filesystems appear to be (understandably) completely sandboxed.

Is this possible in PyFS? If not, is there an alternative way in Python? If not, is there a similar cross-platform solution for a ram-disk that would function in this way?
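For concreteness, here is a minimal sketch of the kind of cross-process access I mean. It is not PyFilesystem; it uses the stdlib's `multiprocessing.managers` to expose an in-memory path-to-bytes store to other processes, and the names (`MemStore`, `StoreManager`, the address and authkey) are made up for illustration:

```python
from multiprocessing.managers import BaseManager

class MemStore:
    """A trivial in-memory 'filesystem': maps paths to bytes."""
    def __init__(self):
        self._files = {}

    def write(self, path, data):
        self._files[path] = data

    def read(self, path):
        return self._files[path]

    def listdir(self):
        return sorted(self._files)

class StoreManager(BaseManager):
    pass

# Owning process: register one shared instance and serve it.
store = MemStore()
StoreManager.register("get_store", callable=lambda: store)
# manager = StoreManager(address=("127.0.0.1", 50000), authkey=b"secret")
# manager.get_server().serve_forever()

# A second, independent process would connect like this:
# StoreManager.register("get_store")
# m = StoreManager(address=("127.0.0.1", 50000), authkey=b"secret")
# m.connect()
# remote = m.get_store()       # proxy to the server's MemStore
# remote.write("a.txt", b"hello")
```

This gives shared access to the *store object*, not a real mounted filesystem, which is roughly what the old PyFilesystem xmlrpc exposure provided.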

  • Welcome to StackOverflow. Please read and follow the posting guidelines in the help documentation, as suggested when you created this account. On topic and how to ask apply here. StackOverflow is not a design, coding, research, or tutorial service. Commented Jun 29, 2018 at 22:46
  • Excuse me? I don't think I did anything that went against the guidelines. I already suggested a solution that did not work, and I am looking for an alternative. Commented Jun 29, 2018 at 22:50
  • @MatthewMage Would a simple ramdisk work for you? Commented Jan 31, 2020 at 12:13

1 Answer


The original PyFilesystem had tools to do just that. You could expose a filesystem via xmlrpc for example, and connect to it via an FS object.

PyFilesystem2 doesn't have such functionality, although v2 has been designed to make implementing 'remote filesystems' much easier.

I'm not sure what your use case is, but you could store your data on an FTP server or Amazon S3, both of which are supported by PyFilesystem. Any particular reason to want an in-memory solution?

The PyFilesystem mailing list may be a better place to brainstorm about such things.


6 Comments

Any plans on reimplementing that functionality? We are looking to have a MemoryFS maintained by one process, where downloaded files are stored, and have it be accessible to other processes. We don't want to redownload the data from a server every time the accessing process is run, so maintaining the data on FTP or S3 is already what we are doing, in a way.
We would also like to avoid touching disk, if possible, so unfortunately your proposed solution does not fit our problem. Thanks for the comment though!
No immediate plans, but that's not to say it will never happen. From the sound of it, you might want to consider a caching proxy, or roll your own caching with memcached or Redis.
Re "Any particular reason to want an in-memory solution?", I am seeking a way to share memory among independent processes, or at least share files. I can do this now with the OS's file system. But the processes have to poll the directory for changes, resulting in performance tradeoffs. If we could have a mountable memory fs that could be shared among processes like physical file systems are, this would enable what I would like to do.
@WillMcGugan Still the case that there are no plans? :(
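On the point about sharing memory among independent processes: as a sketch of what the stdlib offers (Python 3.8+), `multiprocessing.shared_memory` lets unrelated processes attach to a named block of RAM. It is not a mountable filesystem, but it avoids polling a directory on disk. The block name `demo_blk` below is arbitrary:

```python
from multiprocessing import shared_memory

# Process A: create a named block of shared RAM.
shm = shared_memory.SharedMemory(create=True, size=1024, name="demo_blk")
shm.buf[:5] = b"hello"

# Process B (any independent process) can attach by name alone.
other = shared_memory.SharedMemory(name="demo_blk")
data = bytes(other.buf[:5])

other.close()
shm.close()
shm.unlink()  # free the block once everyone is done with it
```

Both sides see the same bytes without any serialization or disk I/O, though you still need your own layer for structure (paths, change notification) on top of the raw buffer.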
