You can use read_csv with a separator that does not appear in the data, such as | or ¥, so each line is read into a single column:
import pandas as pd
from io import StringIO
temp = """
k1[a-token]
v1
v2
k2[a-token]
v1'
k3[a-token]
v1"
v2"
v3"
"""
# after testing, replace StringIO(temp) with 'filename.csv'
df = pd.read_csv(StringIO(temp), sep="|", names=['B'])
print (df)
B
0 k1[a-token]
1 v1
2 v2
3 k2[a-token]
4 v1'
5 k3[a-token]
6 v1"
7 v2"
8 v3"
Then insert a new column A with the key extracted from each [a-token] row and forward-filled, and finally use boolean indexing with a mask from duplicated to remove the key rows from the values column:
df.insert(0, 'A', df['B'].str.extract(r'(.*)\[a-token\]', expand=False).ffill())
df = df[df['A'].duplicated()].reset_index(drop=True)
print (df)
A B
0 k1 v1
1 k1 v2
2 k2 v1'
3 k3 v1"
4 k3 v2"
5 k3 v3"
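The duplicated mask works because, after the forward fill, the first row carrying each key in column A is the key row itself; duplicated() marks that first occurrence as False and every later occurrence as True, so only the value rows survive. A minimal standalone sketch of that behaviour, using the same sample keys:

import pandas as pd

# forward-filled key column as produced by the extract + ffill step
a = pd.Series(['k1', 'k1', 'k1', 'k2', 'k2', 'k3', 'k3', 'k3', 'k3'])

# first occurrence of each key (the key row itself) is False, the rest are True
print(a.duplicated().tolist())
# [False, True, True, False, True, False, True, True, True]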
But if the file has duplicated keys:
print (df)
B
0 k1[a-token]
1 v1
2 v2
3 k2[a-token]
4 v1'
5 k3[a-token]
6 v1"
7 v2"
8 v3"
9 k2[a-token]
10 v1'
df.insert(0, 'A', df['B'].str.extract(r'(.*)\[a-token\]', expand=False).ffill())
df = df[df['A'].duplicated()].reset_index(drop=True)
print (df)
A B
0 k1 v1
1 k1 v2
2 k2 v1'
3 k3 v1"
4 k3 v2"
5 k3 v3"
6 k2 k2[a-token]
7 k2 v1'
Then it is necessary to change the mask, because the second k2[a-token] row is no longer the first occurrence of k2 in column A, so duplicated does not remove it. Instead drop every row that still contains [a-token]:
df.insert(0, 'A', df['B'].str.extract(r'(.*)\[a-token\]', expand=False).ffill())
df = df[~df['B'].str.contains(r'\[a-token\]')].reset_index(drop=True)
print (df)
A B
0 k1 v1
1 k1 v2
2 k2 v1'
3 k3 v1"
4 k3 v2"
5 k3 v3"
6 k2 v1'
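Putting it together, a minimal sketch of the whole pipeline reading from a real file (the filename data.csv is only a placeholder, use your own path):

import pandas as pd

# read each line into a single column B by using a separator that never occurs in the data
df = pd.read_csv('data.csv', sep='|', names=['B'])

# extract the key from the rows containing [a-token] and forward-fill it into column A
df.insert(0, 'A', df['B'].str.extract(r'(.*)\[a-token\]', expand=False).ffill())

# keep only the value rows, i.e. drop every row that still contains the [a-token] marker
df = df[~df['B'].str.contains(r'\[a-token\]')].reset_index(drop=True)

print(df)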