
Consider a typical function with default arguments:

def f(accuracy=1e-3, nstep=10):
    ...

This is compact and easy to understand. But what if we have another function g that will call f, and we want to pass on some arguments of g to f? A natural way of doing this is:

def g(accuracy=1e-3, nstep=10):
    f(accuracy, nstep)
    ...

The problem with this approach is that the default values of the optional arguments get repeated. When propagating default arguments like this, one usually wants the upper function (g) to use the same defaults as the lower function (f), so whenever a default changes in f one has to go through every function that calls it and update the defaults of the arguments they propagate to f.
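
For instance, if f's default accuracy were later tightened to, say, 1e-4 but g were forgotten, g would silently keep passing the stale value:

def f(accuracy=1e-4, nstep=10):   # default updated here ...
    ...

def g(accuracy=1e-3, nstep=10):   # ... but the stale copy lingers here
    f(accuracy, nstep)
    ...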

Another way of doing this is to use a placeholder argument, and fill in its value inside the function:

def f(accuracy=None, nstep=None):
    if accuracy is None: accuracy = 1e-3
    if nstep is None: nstep = 10
    ...
def g(accuracy=None, nstep=None):
    f(accuracy, nstep)
    ...

Now the calling function doesn't need to know what f's defaults are, but f's interface has become more cumbersome and less clear. This is the typical approach in languages without explicit default-argument support, like Fortran or JavaScript, but if one does everything this way in Python, one is throwing away most of the language's default-argument support.
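
A variant of this placeholder idea, for the case where None itself is a meaningful argument value, is a private sentinel object (the name _UNSET here is just illustrative); it shares the same drawbacks:

_UNSET = object()

def f(accuracy=_UNSET, nstep=_UNSET):
    if accuracy is _UNSET: accuracy = 1e-3
    if nstep is _UNSET: nstep = 10
    ...

def g(accuracy=_UNSET, nstep=_UNSET):
    f(accuracy, nstep)
    ...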

Is there a better approach than these two? What is the standard, Pythonic way of doing this?

  • I have asked myself this question so many times. But it might be a question for [programmers](programmers.stackexchange.com)? I thought there was a comp-sci Stack Exchange but I can't find it now. I can't wait to see what kind of answers you get. Commented Apr 10, 2015 at 16:38
  • @MarkMikofski You are thinking of cs.stackexchange.com, though that site deals more with questions that need funky Greek letters and talk about Big O. For this question, I'd be on the edge about whether it is best asked on Stack Overflow or Programmers.SE. As it is here and doesn't appear to be gathering off-topic votes, it's probably OK here. Note that if it were migrated to P.SE, the answers wouldn't match the desired style of answers for that site. Commented Apr 10, 2015 at 16:44
  • @MarkMikofski On the other hand, if the question were one that is primarily opinion-based or too broad on Stack Overflow, it would likely fare equally poorly on Programmers.SE. To that end, you may wish to read What goes on Programmers.SE? A guide for Stack Overflow, which tries to help Stack Overflow users understand the scope of Programmers.SE and avoid poor migration suggestions. Commented Apr 10, 2015 at 16:47
  • Thanks @MichaelT; for others, see Choosing between Stack Overflow and Programmers Stack Exchange. I especially like the "Rule of thumb: if you're sitting in front of your IDE, ask it on Stack Overflow. If you're standing in front of a whiteboard, ask it on Programmers." Commented Apr 10, 2015 at 18:03

4 Answers

10

Define global constants:

ACCURACY = 1e-3
NSTEP = 10

def f(accuracy=ACCURACY, nstep=NSTEP):
    ...

def g(accuracy=ACCURACY, nstep=NSTEP):
    f(accuracy, nstep)

If f and g are defined in different modules, then you could make a constants.py module too:

ACCURACY = 1e-3
NSTEP = 10

and then define f with:

from constants import ACCURACY, NSTEP
def f(accuracy=ACCURACY, nstep=NSTEP):
    ...

and similarly for g.


6 Comments

I was just about to add this as a comment. Yes, this is the technique I often use; it has many advantages, like only needing to update your defaults once. It also gives the code a certain look (which could be good or bad) and definitely means more keystrokes, although if all of your CONSTANTS are 4 letters or fewer, no more than the None option the OP originally suggests.
If f and g are defined in different modules (and if you have another function h calling g), then this would look like def g(accuracy=fmod.ACCURACY, nstep=fmod.NSTEP) and def h(accuracy=gmod.fmod.ACCURACY, nstep=gmod.fmod.NSTEP), wouldn't it? Is there a convenient way to propagate these defaults around various modules?
@amaurea, import the constants from fmod.py in all other modules using from fmod import ACCURACY, NSTEP. Then you don't have to qualify the constants with the full namespace in the function signatures; you can just use @unutbu's answer: def g(accuracy=ACCURACY, nstep=NSTEP) and def h(accuracy=ACCURACY, nstep=NSTEP).
Usually if I have a lot of these constants and they apply to every module in the package, I sneak them into __init__.py or a module called constants.py. Then you can also set something like __all__ = ['ACCURACY', 'NSTEP'] and in each submodule just use from mypackage import * to import all of your constants.
@unutbu, I hadn't read your update; I'm not sure my answer is substantially different from yours. We could just merge them and I could remove mine.
6

I think the procedural paradigm is narrowing your view of the problem. Here are some solutions using other Python features.

Object-oriented programming

You're calling f() and g() with the same set of parameters -- a good hint that these parameters represent the same entity. Why not make it an object?

class FG:
    def __init__(self, accuracy=1e-3, nstep=10):
        self.accuracy = accuracy
        self.nstep = nstep

    def f(self):
        print('f', self.accuracy, self.nstep)

    def g(self):
        self.f()
        print('g', self.accuracy, self.nstep)

FG().f()
FG(1e-5).g()
FG(nstep=20).g()

Functional programming

You may convert this into a higher-order-function approach -- i.e. something like this:

from functools import partial

def g(accuracy, nstep):
    print('g', accuracy, nstep)

def f(accuracy=1e-3, nstep=10):
    g(accuracy, nstep)
    print('f', accuracy, nstep)

def fg(func, accuracy=1e-3, nstep=10):
    return partial(func, accuracy=accuracy, nstep=nstep)

fg(g)()
fg(f, 2e-5)()
fg(f, nstep=32)()

But this is also a tricky approach -- the f() and g() calls were swapped here. There are probably better ways to do this -- e.g. pipelines with callbacks; I'm not that good with FP :(
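
A simpler variation along the same lines, sketched here with illustrative underscore names, keeps f() and g() in their original roles and binds the shared defaults once with functools.partial (call-time keywords still override the bound values):

from functools import partial

DEFAULTS = dict(accuracy=1e-3, nstep=10)

def _f(accuracy, nstep):
    print('f', accuracy, nstep)

def _g(accuracy, nstep):
    _f(accuracy, nstep)
    print('g', accuracy, nstep)

# Bind the shared defaults once; keyword arguments at call time take precedence.
f = partial(_f, **DEFAULTS)
g = partial(_g, **DEFAULTS)

f()                # f 0.001 10
g(accuracy=2e-5)   # the override propagates to the inner _f call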

Dynamism & introspection

This is a much more complex approach, and it requires digging into CPython internals, but since CPython allows it, why not use it?

Here is a decorator that updates default values through the __defaults__ attribute:

from inspect import getfullargspec as getargspec  # inspect.getargspec is deprecated/removed in newer Pythons

class use_defaults:
    def __init__(self, deflt_func):
        self.deflt_func = deflt_func

    def __call__(self, func):
        # Map deflt_func's defaulted argument names to their default values
        # (defaults align with the rightmost arguments).
        spec = getargspec(self.deflt_func)
        defltargs = dict(zip(spec.args[-len(spec.defaults):], spec.defaults))

        defaults = (list(func.__defaults__)
                    if func.__defaults__ is not None
                    else [])

        # Walk func's not-yet-defaulted arguments from right to left.
        func_args = getargspec(func).args
        func_args = reversed(func_args[:len(func_args) - len(defaults)])

        for func_arg in func_args:
            if func_arg not in defltargs:
                # Default arguments don't allow gaps; ignore the rest.
                break
            defaults.insert(0, defltargs[func_arg])

        # Update the function's tuple of default arguments.
        func.__defaults__ = tuple(defaults)

        return func

def f(accuracy=1e-3, nstep=10, b='bbb'):
    print('f', accuracy, nstep, b)

@use_defaults(f)
def g(first, accuracy, nstep, a='aaa'):
    f(accuracy, nstep)
    print('g', first, accuracy, nstep, a)

g(True)
g(False, 2e-5)
g(True, nstep=32)

This, however, rules out keyword-only arguments, which have a separate __kwdefaults__ attribute and would probably break the logic behind the use_defaults decorator.

You may also add arguments at runtime by using a wrapper, but that will probably reduce performance.
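
Keyword-only arguments could be handled along the same lines through __kwdefaults__ -- a rough sketch using inspect.signature (the decorator name use_kwdefaults is just illustrative):

import inspect

def use_kwdefaults(deflt_func):
    # Collect deflt_func's defaulted parameters by name.
    src = {name: p.default
           for name, p in inspect.signature(deflt_func).parameters.items()
           if p.default is not inspect.Parameter.empty}

    def decorator(func):
        kwonly = [name for name, p in inspect.signature(func).parameters.items()
                  if p.kind is inspect.Parameter.KEYWORD_ONLY]
        # __kwdefaults__ is consulted at call time for missing keyword-only arguments.
        func.__kwdefaults__ = {**(func.__kwdefaults__ or {}),
                               **{name: src[name] for name in kwonly if name in src}}
        return func

    return decorator

def f(accuracy=1e-3, nstep=10):
    print('f', accuracy, nstep)

@use_kwdefaults(f)
def g(first, *, accuracy, nstep):
    f(accuracy, nstep)
    print('g', first, accuracy, nstep)

g(True)             # picks up f's defaults for the keyword-only parameters
g(False, nstep=32)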

4 Comments

Your third suggestion was very interesting. I had not considered that something like that might be possible. Applying the decorator to a function is simple and descriptive. Nice!
I don't understand what your second suggestion adds. How is it different from simply def f(a,b): pass; def g(a=1,b=2): f(a,b)? It's easy to specify the defaults only once if you only do so at the top. The problem is that I want f to work as a stand-alone function with sensible defaults by itself. The typical use case of f is not to be called by g; that's just one possible use of it.
I think your first solution would work, but I'm not too fond of it. I think it couples f and g too tightly. The user of g would basically be passing in an "f-parameters" object, which leaks to the user the implementation detail that f is being used to implement g.
@amaurea, thanks for accepting my answer! I changed the functional programming sample, so the tight dependency between f() and g() is now broken. Also, my answer is a conceptual one; the path to choose depends on the nature of f and g.
3

My favorite, kwargs!

def f(**kwargs):
    accuracy = kwargs.get('accuracy', 1e-3)
    nstep = kwargs.get('nstep', 10)
    ...

def g(**kwargs):
    f(**kwargs)

Of course, feel free to use the constants as described above.
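
For example, a possible way to combine this with the shared-constants idea (the DEFAULTS dict here is just illustrative) might look like:

DEFAULTS = {'accuracy': 1e-3, 'nstep': 10}

def f(**kwargs):
    opts = {**DEFAULTS, **kwargs}   # caller-supplied values win over the defaults
    print('f', opts['accuracy'], opts['nstep'])

def g(**kwargs):
    f(**kwargs)                     # pass everything through untouched

g()            # f 0.001 10
g(nstep=20)    # f 0.001 20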

3 Comments

This approach is similar to the None approach in that the actual defaults are defined inside the body of f. But with kwargs g can't rename the arguments, so if g calls both f1 and f2 and those functions have conflicting argument names, you have a problem. The kwargs approach also makes it obscure which arguments a function actually takes. So I'm not sure I prefer this approach to None in general.
You can pop from and add to kwargs within g. The point of kwargs is that you make the change in only two places when a function is called many times throughout the code (i.e. you have a, b, c, ... z that all call f), rather than in all of them. You can still explicitly define the signature of f as f(accuracy=None, ...) if you choose.
If one builds multiple levels of functions calling each other with kwargs, then modifying kwargs in one function is dangerous, as it might rename an option that was intended for a sibling function, so one would have to make a copy of kwargs before modifying it. Still, the good thing about the kwargs approach is that g can now be agnostic even to the names and number of f's arguments, not just to their default values. If one adds a new parameter to f, then g automatically supports it (unless there's a collision).
3

Dovetailing with @unutbu:

If you are using a package structure:

mypackage
|
+- __init__.py
|
+- fmod.py
|
+- gmod.py
|
...

then in __init__.py put your constants as @unutbu suggests:

ACCURACY = 1e-3
NSTEP = 10
__all__ = ['ACCURACY', 'NSTEP']

then in fmod.py:

from mypackage import *
def f(accuracy=ACCURACY, nstep=NSTEP):
    ...

and in gmod.py (and any other modules) import your constants:

from mypackage import *
def g(accuracy=ACCURACY, nstep=NSTEP):
    f(accuracy, nstep)
    ...

Or, if you are not using packages, just create a module called myconstants.py and do exactly the same thing as with __init__.py, except that instead of importing from mypackage you would import from myconstants.

One advantage of this style is that if you later want to read your constants from a file (or take them as arguments to a function), you can put the code to do that in __init__.py or myconstants.py.
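
A rough sketch of what the file-reading version of __init__.py could look like, assuming a hypothetical defaults.json next to the package (the file name and keys are purely illustrative):

import json
import os

# Built-in fallbacks, used when no config file is present.
ACCURACY = 1e-3
NSTEP = 10

_CONFIG = os.path.join(os.path.dirname(__file__), 'defaults.json')
if os.path.exists(_CONFIG):
    with open(_CONFIG) as fh:
        _cfg = json.load(fh)
    ACCURACY = _cfg.get('accuracy', ACCURACY)
    NSTEP = _cfg.get('nstep', NSTEP)

__all__ = ['ACCURACY', 'NSTEP']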

