0 votes
0 answers
29 views

I am developing a project using Flask, Flask-SocketIO, and Python's multiprocessing library. My site runs on localhost (127.0.0.1:5000). The goal is to update the HTML template every second from ...
Wery848
1 vote
0 answers
77 views

I'm experiencing an issue where the parent Python process terminates unexpectedly when debugging, but only when a child process is sent SIGTERM. The same code works perfectly when run normally (...
Skyman2413
3 votes
1 answer
132 views

I'm trying to use a custom multiprocessing manager, mostly following the example from the docs. The main difference is that my class updates internal state. It looks like this: class IdIndex: def ...
Achim • 15.7k
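A minimal sketch of the custom-manager pattern this question describes (the class name `IdIndex` comes from the excerpt; its methods and the fork start method are my assumptions): every call on the proxy is forwarded to the single instance living in the manager's server process, so state updates must go through proxy methods.

```python
import multiprocessing as mp
from multiprocessing.managers import BaseManager

class IdIndex:
    """Lives in the manager's server process; callers only hold a proxy."""
    def __init__(self):
        self._ids = {}

    def add(self, key, value):
        self._ids[key] = value   # the state change happens server-side

    def get(self, key):
        return self._ids.get(key)

class MyManager(BaseManager):
    pass

MyManager.register("IdIndex", IdIndex)   # public methods become proxy methods

# fork start method for brevity (POSIX); spawn platforms also need a
# __main__ guard around this part.
with MyManager(ctx=mp.get_context("fork")) as manager:
    index = manager.IdIndex()   # a proxy to the single shared instance
    index.add("a", 1)
    result = index.get("a")     # round-trips through the manager process
```

Because both calls hit the same server-side instance, the mutation made by `add` is visible to the later `get`, which is the behaviour the docs' example relies on.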
3 votes
1 answer
56 views

I am using Python's multiprocessing library to create some subprocesses. I implemented a way to terminate my child processes gracefully. To terminate them even if my main process crashes, I made ...
pcsh4814
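One common way to get the behaviour this question hints at is `daemon=True`: daemon children are killed when the parent exits, even on a crash. A hedged sketch (the worker body and the fork start method are my assumptions); note daemons are terminated abruptly, so a shared Event still provides the graceful path:

```python
import multiprocessing as mp
import time

def worker(stop_event):
    # Work in small steps so the stop flag is checked frequently.
    while not stop_event.is_set():
        time.sleep(0.01)

ctx = mp.get_context("fork")   # POSIX; Windows needs spawn + a __main__ guard
stop = ctx.Event()
p = ctx.Process(target=worker, args=(stop,), daemon=True)
p.start()
# daemon=True: if the parent dies without cleanup, the child dies too --
# but abruptly, so the Event remains the graceful shutdown mechanism.
stop.set()                     # ask the worker to finish its loop
p.join(timeout=5)
exit_ok = (p.exitcode == 0)
```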
0 votes
1 answer
44 views

I’m writing a Python web crawler, and I’m using multithreading to execute a function. The function returns a link when it finishes. Here’s the general structure of the function: def download_link(...
Clyde Cole
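For collecting return values from a function run across threads, `concurrent.futures` is the usual fit. A sketch under my own assumptions (`download_link` here is a stand-in for the asker's real download function, and the URLs are dummies):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def download_link(url):
    # Stand-in for the real download; returns the link when it finishes.
    return url

urls = [f"https://example.com/{i}" for i in range(5)]
results = []
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(download_link, u) for u in urls]
    for fut in as_completed(futures):   # yields each future as it completes
        results.append(fut.result())    # the value returned by the function
```

`as_completed` hands back finished work in completion order, which suits crawlers where downloads finish at unpredictable times.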
2 votes
2 answers
74 views

The existing examples for the multiprocessing module assume the tasks are independent, but my situation is "hairier". I need to process several hundred thousand widgets. These are ...
Mikhail T. • 4,256
0 votes
1 answer
28 views

I'm writing code in Python 2.7 (for compatibility with a bigger framework) that does the following: takes an ID_something.py file, opens it, reads the lines from it, then closes the file. Creates an ...
Artur • 13
1 vote
0 answers
50 views

I am trying to implement a stop flag that gracefully prevents new parallel jobs from being started. Specifically, I am running a large number of simulations that each takes a few to many hours; in ...
barceloco • 468
1 vote
0 answers
34 views

Consider this sample: import multiprocessing as mp import time class CycleThatDoesntStop: shouldrun = True def run(self): while self.shouldrun: print("im alive", self....
user30252103
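The likely catch in samples like this one: a plain attribute such as `shouldrun` lives in each process's own memory, so flipping it in the parent is invisible to the child. A hedged sketch of the usual fix, a shared `multiprocessing.Event` (the simplified class and fork start method are my assumptions):

```python
import multiprocessing as mp
import time

class Cycle:
    """A shared Event replaces the per-process `shouldrun` attribute."""
    def __init__(self, stop_event):
        self.stop_event = stop_event

    def run(self):
        while not self.stop_event.is_set():
            time.sleep(0.01)

ctx = mp.get_context("fork")   # POSIX start method for brevity
stop = ctx.Event()
p = ctx.Process(target=Cycle(stop).run)
p.start()
stop.set()                     # reaches the child, unlike a bool attribute
p.join(timeout=5)
stopped = not p.is_alive()
```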
0 votes
1 answer
79 views

I have a cluster of compute nodes, each node with many CPUs. I want them to execute commands located in a file, one command per line. The drive where the command file is located is mounted on all ...
Botond • 2,842
2 votes
1 answer
55 views

In short, I'm trying to run a python script from another python script but in the background, allowing the parent script to continue executing, and then to pass a CTRL_C_EVENT to the child script to ...
narab • 29
0 votes
1 answer
73 views

I am trying to measure the processing time, or CPU time, of a CPU-intensive computation that has been parallelized with multiprocessing. However, simply bookending the parallelization of the ...
SapereAude
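Bookending with wall-clock timers measures elapsed time, and `time.process_time()` only counts the parent's CPU. On POSIX, `os.times()` additionally reports CPU consumed by waited-for children, which is one way to capture what the pool workers burned. A sketch under my assumptions (fork start method, toy workload):

```python
import multiprocessing as mp
import os
import time

def burn(n):
    total = 0
    for i in range(n):     # CPU-bound busy work
        total += i * i
    return total

ctx = mp.get_context("fork")   # POSIX; os.times() child fields are POSIX too
before = os.times()
wall_start = time.perf_counter()
with ctx.Pool(2) as pool:
    pool.map(burn, [200_000] * 4)
wall = time.perf_counter() - wall_start
after = os.times()
# children_user/children_system count CPU burned by waited-for children,
# which time.process_time() in the parent never sees.
child_cpu = ((after.children_user - before.children_user)
             + (after.children_system - before.children_system))
```

The child counters only update once the workers have been reaped, which the pool's shutdown takes care of here.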
1 vote
1 answer
79 views

I am trying to use the multiprocessing module to parallelize a CPU-intensive piece of code over multiple cores. This module looks terrific in several respects, but when I try to pass lists and ...
SapereAude
3 votes
1 answer
186 views

I have a question concerning sharing GPU tensors between processes using the torch.multiprocessing module. Here is a minimal example: import torch import torch.multiprocessing as mp from torch.nn ...
Daniel • 31
1 vote
1 answer
64 views

I have a problem when I register a class at runtime on my BaseManager. It's pretty obvious that static analysis cannot tell what that function is, but how can I avoid it? import multiprocessing....
Gabriele Passoni
0 votes
0 answers
86 views

I have an ML problem where I want to leverage the power of Support Vector Classifiers (SVC) or any other 2-class classifier and compare them to my NN models. The problem is that binary classifiers are ...
user30013477
0 votes
1 answer
55 views

It appears that logging.handlers.QueueHandler/.QueueListener is breaking the .setLevel of an attached logging.FileHandler in Python 3.12.9 on Windows. Running the following minimal example results in ...
feetwet • 3,507
0 votes
0 answers
32 views

I am calling a machine learning model for a dataset that I have loaded using torch DataLoader: class FilesDataset(): def __init__(self, path): file_paths = glob.glob(os.path.join(path, "*....
Iva • 367
1 vote
0 answers
55 views

I've got some code that, for reasons not germane to the problem at hand: Must write very large log messages Must write them from multiple multiprocessing worker processes Must not interleave the logs ...
ShadowRanger
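One common approach to the non-interleaving requirement: share a `multiprocessing.Lock` with every worker via the pool initializer and hold it for the whole write, so each message lands contiguously. A sketch under my assumptions (file path, message shape, fork start method), not the asker's setup:

```python
import multiprocessing as mp
import os
import tempfile

_lock = None

def init_worker(lock):
    global _lock
    _lock = lock                   # one lock shared by every worker

def log_big_message(args):
    path, msg = args
    with _lock:                    # hold the lock for the WHOLE message
        with open(path, "a") as f:
            f.write(msg + "\n")

path = os.path.join(tempfile.gettempdir(), "mp_log_demo.txt")
open(path, "w").close()
ctx = mp.get_context("fork")       # POSIX; spawn needs a __main__ guard
lock = ctx.Lock()
msgs = [(path, f"message-{i}-" + "x" * 1000) for i in range(8)]
with ctx.Pool(4, initializer=init_worker, initargs=(lock,)) as pool:
    pool.map(log_big_message, msgs)
with open(path) as f:
    lines = f.read().splitlines()  # each line is one intact message
```

The trade-off is that the lock serializes the writes themselves; for very large messages a dedicated logging process fed by a queue is the other standard pattern.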
0 votes
1 answer
83 views

I have some simple code packaged in a module/folder "src". File sample.py: import multiprocessing def f(x): return x*x def big_run(n): with multiprocessing.Pool(5) as p: p.map(f,...
Sébastien Eskenazi
2 votes
3 answers
122 views

Question: Does scipy.optimize have minimizing functions that can divide their workload among multiple processes to save time? If so, where can I find the documentation? I've looked a fair amount ...
trevdawg122
0 votes
1 answer
79 views

This code (it will only work in Linux) makes a 100MB numpy array and then runs imap_unordered where it in fact does no computation. It runs slowly and consistently. It outputs a . each time the square ...
Simd • 21.5k
0 votes
0 answers
69 views

I'm encountering an issue where my Python script using multiprocessing.Pool hangs during startup, but only when launched via the debugpy debugger/launcher integrated with VS Code on Windows. When run ...
STATTHINGY TRENTON
1 vote
1 answer
175 views

Here is a code to demo my question: from multiprocessing import Process def worker(): print("Worker running") if __name__ == "__main__": p = Process(target=worker) p....
Gordon Hui
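The excerpt's demo cuts off right after `p....`; the usual complete shape is `start()` followed by `join()`. A sketch with my own additions (a Queue to carry evidence back instead of relying on the child's print, and the fork start method):

```python
import multiprocessing as mp

def worker(q):
    q.put("Worker running")     # send evidence back instead of printing

ctx = mp.get_context("fork")    # POSIX; Windows defaults to spawn
q = ctx.Queue()
p = ctx.Process(target=worker, args=(q,))
p.start()                       # the child only runs once start() is called
message = q.get(timeout=5)      # blocks until the worker has produced it
p.join()                        # then wait for the child to exit
```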
2 votes
1 answer
75 views

Hi, I'm observing strange behaviour with the Python multiprocessing Queue object. My environment: OS: Windows 10, Python: 3.13.1, but I observed the same with: OS: Windows 10, Python: 3.12.7 and: OS: ...
Alberto B • 642
0 votes
2 answers
96 views

I am learning multiprocessing in Python and am trying to incorporate a worker pool for managing downloads. I have narrowed my queue issue down to something with OOP, but I don't know what it is. The ...
Tom Smith
0 votes
0 answers
103 views

I am using pathos multiprocessing to parallelize a gradient calculation that is embarrassingly parallel, using finite differences. Below is a high-level example of how it is set up: ... class ...
der • 1
0 votes
0 answers
21 views

I have almost the same problem as the person in this post: python ProcessPoolExecutor do not work when in function. However, in my code the function I am trying to call is an imported function located ...
Pwohlucky
0 votes
0 answers
46 views

I have issues regarding my own implementation of a parallelized database using the TinyDB and multiprocessing libraries in Python. It always gives errors, such as this one: "c:\Users\1765536\AppData\...
Lac33 Lac33
1 vote
1 answer
63 views

I'm running an experiment on the copy-on-write mechanism in Python multiprocessing. I created a large file of 10GB and load the file into large_object in main. file_path = 'dummy_large_file.bin' try: ...
LongTran
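A small sketch of the mechanism this experiment probes, with stand-in data instead of the asker's 10 GB file: under `fork` the child sees the parent's pages and they are only copied when written. One caveat worth knowing: CPython's refcounting writes to an object's header on mere access, so even "read-only" use dirties some pages.

```python
import multiprocessing as mp

large_object = b"x" * (10 * 1024 * 1024)   # stand-in for the 10GB buffer

def child_reads(conn):
    # The forked child sees the parent's memory without pickling it over;
    # pages are copied lazily, only when either side writes to them.
    # (CPython refcounting still dirties the object's header page.)
    conn.send(len(large_object))
    conn.close()

ctx = mp.get_context("fork")     # copy-on-write only applies to fork
parent_conn, child_conn = ctx.Pipe()
p = ctx.Process(target=child_reads, args=(child_conn,))
p.start()
seen = parent_conn.recv()
p.join()
```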
0 votes
1 answer
82 views

Here's a batch prediction case using multiprocessing. Steps: After with mp.Pool(processes=num_processes) as pool, there's a with Dataset(dataset_code) as data in the main process using websocket to ...
Jason • 37
1 vote
1 answer
44 views

Starting with one of the two examples from the User Guide ( https://aiomultiprocess.omnilib.dev/en/latest/guide.html ), I started my testing with my own variation: import asyncio import psutil import ...
LudgerH • 103
1 vote
0 answers
40 views

I am trying to push logs for a multiprocessing job into ECS S3. Following is my code snippet: logger.py import logging from S3_log_handler import S3LogHandler def setup_logger(): # Set up the ...
Rupal • 79
2 votes
1 answer
437 views

Why does the code shown below either finish normally or hang depending on which lines are commented/uncommented, as described in the table below? Summary of table: if I initialise sufficiently large ...
Jake Levi • 1,960
2 votes
1 answer
103 views

I have a script that's trying to analyse some images in parallel. For some reason, I get intermittent crashes if the func passed to the pool returns variable-sized data. This is only if I try to exit ...
Babar Shariff
-4 votes
2 answers
90 views

I want to compare the effect of multiprocessing on bubble sort. Let us first consider the original version without multiprocessing: import multiprocessing import random import time import numpy as np ...
dato datuashvili
0 votes
1 answer
324 views

Hi, I am trying to create a program that, when run, opens two windows at the same time from the same app. For this, multithreading is needed, but it seems I get some strange errors: XIO: fatal IO error ...
George Răbuș
0 votes
2 answers
82 views

Consider the following minimal setup: /mymodule ├── __init__.py ├── main.py └── worker.py __init__.py is empty main.py: import sys import logging import multiprocessing from test.worker import ...
Art Gertner
0 votes
0 answers
77 views

I have a flask application delivered by gunicorn that spawns multiple threads and processes from itself during the request. The problem is that when using the standard app.logger, some of the children ...
Tony • 772
2 votes
2 answers
130 views

The Python docs seem to suggest that it's required to call both close() and terminate() when using multiprocessing.pool.Pool without a context manager. Warning - multiprocessing.pool objects have ...
MadMarch
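A sketch of the distinction behind this question, under my own assumptions (toy `square` function, fork start method): `close()` lets queued work finish and `join()` waits for the workers, while `terminate()` stops workers immediately; notably, the `with`-statement form of Pool calls `terminate()` on exit, so a graceful drain means calling `close()`/`join()` yourself.

```python
import multiprocessing as mp

def square(x):
    return x * x

ctx = mp.get_context("fork")   # POSIX; spawn platforms need a __main__ guard
pool = ctx.Pool(2)
try:
    results = pool.map(square, range(5))
    pool.close()      # no further tasks; queued work is allowed to finish
    pool.join()       # wait for the workers to exit cleanly
finally:
    pool.terminate()  # harmless after close()/join(); stops workers on error
```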
0 votes
2 answers
89 views

We're working on a concurrency problem where a single "Sample" object has multiple dependent tasks that must be executed in stages. For instance, stage 1 has tasks (1a, 1b), stage 2 has ...
NanoNerd • 142
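A simple pattern for staged dependent tasks like the ones described above: run each stage through the pool and only move on once `map()` returns, since `map()` blocks until the whole batch completes. The stage names and fork start method below are my assumptions, not the asker's code:

```python
import multiprocessing as mp

def run_task(name):
    return f"{name} done"      # stand-in for the real per-stage work

stages = [["1a", "1b"], ["2a", "2b", "2c"], ["3a"]]
log = []
ctx = mp.get_context("fork")   # POSIX; spawn platforms need a __main__ guard
with ctx.Pool(2) as pool:
    for stage in stages:
        # map() blocks until every task in the stage has returned,
        # which acts as the barrier between stages.
        log.extend(pool.map(run_task, stage))
```

For finer-grained dependencies (task 2a needing only 1a, say) a dependency graph with futures is the heavier alternative; the per-stage barrier is the simplest correct version.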
2 votes
1 answer
91 views

I'm trying to implement a multiprocessing version of object detection (the video source can be either a camera or a video file) with the YOLO model from ultralytics. I implemented a Queue to add frames to and a process ...
Simone Carlesi
0 votes
0 answers
39 views

I am working with OpenAlex data, which comes as multiple gz files. I need to translate JSON-like data into CSV format. For example: one line of the original file is like: {"id": "https://...
Liang Shuyuan
2 votes
2 answers
317 views

I have some working code that I need to improve the run time on dramatically and I am pretty lost. Essentially, I will get zip folders containing tens of thousands of json files, each containing ...
user2731076
0 votes
0 answers
102 views

I have a very specific issue with the combination of multiprocessing + Azure authentication and could not find a relevant open issue, so here I go: I am working on a Python program that uses ...
marting • 503
0 votes
3 answers
60 views

import multiprocessing import time def task_function(x): print(f"Processing {x} in process {multiprocessing.current_process().name}") time.sleep(1) def main(): data = [1, 2, 3, ...
hrdom • 168
0 votes
0 answers
107 views

I'm trying to use Selenium to scrape and perform some actions on a list of URLs. The list itself contains some thousands of URLs. So, I'm trying to use a pool of processes to manage multiple instances of ...
ILP • 51
1 vote
0 answers
61 views

I'm making a script that calls the same function several times with different parameters, then saves the results in a spreadsheet (Excel). To do this, I'm using multiprocessing.Pool, and I need to save ...
Pernoctador
0 votes
1 answer
181 views

The following function creates an instance of a class that can be shared between processes: def create_shared_object (constructor: ?) -> ?: from multiprocessing.managers import BaseManager; ...
isa-2d • 31
0 votes
0 answers
66 views

For some special reasons I want to use the spawn method to create workers in the DataLoader of PyTorch; here is a demo: import torch import torch.nn as nn import torch.optim as optim from torch.utils.data ...
forestbat • 1,115
