
I have a Unicode-encoded (with BOM) source file and a string that contains Unicode symbols. I want to replace every character that does not belong to a defined character set with an underscore.

#  coding: utf-8 
import os
import sys
import re

t = "🙂 [°]    \n  € dsf $ ¬ 1 Ä 2 t3¥4Ú";
print re.sub(r'[^A-Za-z0-9 !#%&()*+,-./:;<=>?[\]^_{|}~"\'\\]', '_', t, flags=re.UNICODE)

output:     ____ [__]    _  ___ dsf _ __ 1 __ 2 t3__4__
expected:   _ [_]    _  _ dsf _ _ 1 _ 2 t3_4_

But each character is replaced by as many underscores as there are bytes in its UTF-8 representation.
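For illustration, a short sketch (assuming Python 2 and the source file saved as UTF-8) of what the regex actually sees when t is a plain byte string:

#  coding: utf-8 
t = "€"          # without a u prefix this is a byte string
print repr(t)    # '\xe2\x82\xac' -- the euro sign is three UTF-8 bytes
print len(t)     # 3, so the substitution emits three underscores for it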

A possible additional problem:

In the actual problem the string is read from a Unicode file by another Python module, and I do not know whether that module handles the encoding correctly. So the string variable may be a plain byte string (treated as ASCII) that actually contains UTF-8 byte sequences.

3 Comments
  • And that's why we operate on text, not bytes.
  • Just a hint, as I have no idea whether this works in Python: I used this regex to look for Unicode characters: /[\u007F-\uFFFF]/. It works fine in JavaScript.
  • If you add [$@~], the whole thing can be replaced with [^\x20-\x7e], but this will also match control characters.
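A sketch of the character class suggested in the last comment (not from the original question); it keeps printable ASCII and replaces everything else, including the newline, with an underscore (the surrogate-pair issue discussed in the answer below still applies on narrow Python 2 builds):

#  coding: utf-8 
import re

t = u"🙂 [°]    \n  € dsf $ ¬ 1 Ä 2 t3¥4Ú"
# Anything outside printable ASCII (U+0020 to U+007E) becomes '_',
# so the control character \n is replaced as well.
print re.sub(ur'[^\x20-\x7e]', u'_', t)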

2 Answers

3

Operate on Unicode strings, not byte strings. Your source is encoded as UTF-8, so each character is encoded as one to four bytes. Decoding to Unicode strings or using Unicode constants will help. The code also appears to be Python 2-based, so on a narrow Python 2 build (the default on Windows) you'll still have an issue, and you can also have issues with graphemes built from two or more Unicode code points:

#  coding: utf-8 
import re

t = u"🙂 [°]    \n  € dsf $ ¬ 1 Ä 2 t3¥4Ú";
print re.sub(ur'[^A-Za-z0-9 !#%&()*+,-./:;<=>?[\]^_{|}~"\'\\]', '_', t, flags=re.UNICODE)

Output (on Windows Python 2.7 narrow build):

__ [_]    _  _ dsf _ _ 1 _ 2 t3_4_

Note that the first emoji still produces a double underscore. Unicode characters above U+FFFF are encoded as surrogate pairs on a narrow build. This can be handled by matching them explicitly: the first code unit of a surrogate pair is in the range U+D800 to U+DBFF and the second is in the range U+DC00 to U+DFFF:

#  coding: utf-8 
import re

t = u"🙂 [°]    \n  € dsf $ ¬ 1 Ä 2 t3¥4Ú";
print re.sub(ur'[\ud800-\udbff][\udc00-\udfff]|[^A-Za-z0-9 !#%&()*+,-./:;<=>?[\]^_{|}~"\'\\]', '_', t, flags=re.UNICODE)

Output:

_ [_]    _  _ dsf _ _ 1 _ 2 t3_4_

But you'll still have a problem with complex emoji:

#  coding: utf-8 
import re

t = u"👨🏻‍👩🏻‍👧🏻‍👦🏻";
print re.sub(ur'[\ud800-\udbff][\udc00-\udfff]|[^A-Za-z0-9 !#%&()*+,-./:;<=>?[\]^_{|}~"\'\\]', '_', t, flags=re.UNICODE)

Output:

___________
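As the comments below note, the third-party regex module can match grapheme clusters with \X. A sketch of that approach (not part of the original answer; whether the whole ZWJ family sequence collapses to a single underscore depends on the regex version's Unicode data and on narrow vs. wide builds):

#  coding: utf-8 
import regex  # third-party module, e.g. pip install regex

t = u"👨🏻‍👩🏻‍👧🏻‍👦🏻 dsf 1 Ä"
allowed = regex.compile(ur'[A-Za-z0-9 !#%&()*+,-./:;<=>?[\]^_{|}~"\'\\]')

def keep_or_replace(m):
    g = m.group(0)  # one grapheme cluster per match
    # Keep single allowed characters; collapse every other grapheme to one '_'.
    return g if len(g) == 1 and allowed.match(g) else u'_'

print regex.sub(ur'\X', keep_or_replace, t)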

4 Comments

Your string with the complex emoji is composed of 11 code points, and 11 underscores are returned.
@IgnacioVazquez-Abrams hence the problem. Visually the complex emoji looks like one character (or four, on my current browser). Not an easy problem to solve. The 3rd party regex module can detect graphemes, but I'm just pointing out the problem is still complex.
Thanks, but what if (my addition at the end) the string is defined somewhere and is not a Unicode string but a normal string that was read in without the proper encoding set? How can I treat this string (which is more like a byte array) as a Unicode string?
@vlad_tepesch .decode() the string with the correct encoding.
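A sketch of that, assuming the file really is UTF-8 with a BOM as described in the question (the file name input.txt is just a placeholder); the utf-8-sig codec decodes UTF-8 and strips a leading BOM:

#  coding: utf-8 
import re

raw = open('input.txt', 'rb').read()  # byte string, as another module might hand it over
text = raw.decode('utf-8-sig')        # now a unicode string, with the BOM removed
print re.sub(ur'[^A-Za-z0-9 !#%&()*+,-./:;<=>?[\]^_{|}~"\'\\]', '_', text, flags=re.UNICODE)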
-1

How about:

print(re.sub(r'[^A-Öa-ö0-9 !#%&()*+,-./:;<=>?[\]^_{|}~"\'\\]', '_', t))

