Others have already explained that you need a unicode regex with unicode arguments to work with Unicode properly. Python 2 is likely storing '¤☃' in encoded form, often UTF-8, which stores your input as '\xc2\xa4\xe2\x98\x83' and the regex itself as '[^a-\xc3\xa5+_\\s]'. That means your character class is excluding whitespace and ordinals 97 through 195 from substitution (plus explicitly excluding 165, but that's already inside the range), not ordinals 97 through 229 as you expected. The thing is, since almost every byte of the UTF-8 encoded input falls in that range (aside from the \xe2 byte, which gets dropped), your output is only lightly filtered.
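You can watch this byte-level behaviour directly. The sketch below reproduces it with Python 3 bytes objects (a Python 2 str is the same sequence of bytes), using a hypothetical input 'a¤ ☃b' for illustration:

```python
import re

# UTF-8 encode the input: '¤' becomes \xc2\xa4 and '☃' becomes \xe2\x98\x83.
data = 'a¤ ☃b'.encode('utf-8')
assert data == b'a\xc2\xa4 \xe2\x98\x83b'

# The byte-level class keeps bytes 97-195 (a through \xc3), plus \xa5,
# '+', '_', and ASCII whitespace; everything else gets substituted away.
pattern = rb'[^a-\xc3\xa5+_\s]'
result = re.sub(pattern, b'', data)

# Only \xe2 (226) falls outside the kept range, so only that one byte is
# dropped, leaving mojibake rather than the filtering you intended.
print(result)  # b'a\xc2\xa4 \x98\x83b'
```

Note how the surviving `\x98\x83` bytes are orphaned UTF-8 continuation bytes; the result isn't even valid UTF-8 any more.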
Even if you switch to using unicode properly, ord(u'¤') is 164 while ord(u'å') is 229, so ¤ is still correctly preserved: it falls inside the character class you've excluded from substitution.
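With a proper unicode regex the same arithmetic applies to code points instead of bytes. A quick check (Python 3 shown, where every str is unicode, and assuming your original pattern was [^a-å+_\s]):

```python
import re

assert ord('¤') == 164  # inside the range 97..229 your class excludes
assert ord('å') == 229

# With real code points, ¤ (164) falls inside a-å (97-229), so the negated
# class never matches it and it survives; ☃ (9731) is outside and is removed.
print(re.sub(r'[^a-å+_\s]', '', 'a¤ ☃b'))  # a¤ b
```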
You shouldn't be using regular expressions here, because it's not practical to exhaustively enumerate every alphabetic and whitespace character scattered across the Unicode range while excluding everything else. Instead, use the tools that actually consult the Unicode database to inspect character properties:
>>> u''.join(x for x in u'a¤ ☃b' if x.isspace() or x.islower())
u'a b'
That's much clearer about exactly what you're trying to do, and it should be fast enough; the Unicode database Python ships makes checking character attributes fairly cheap. If your inputs arrive as str (encoded as UTF-8) and you must produce str output, just decode to unicode, filter, then encode back:
>>> inp = 'a¤ ☃b' # Not unicode!
>>> inpuni = inp.decode('utf-8')
>>> outpuni = u''.join(x for x in inpuni if x.isspace() or x.islower())
>>> outp = outpuni.encode('utf-8')
>>> outp
'a b'
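On Python 3 the str type is already unicode, so the decode/encode round trip disappears and the whole thing reduces to the comprehension. A minimal sketch (the helper name keep_lower_and_space is my own, not anything standard):

```python
def keep_lower_and_space(text):
    """Keep lowercase letters and whitespace, per the Unicode database."""
    return ''.join(ch for ch in text if ch.isspace() or ch.islower())

print(keep_lower_and_space('a¤ ☃b'))  # a b
```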