Polynomial regular expression used on uncontrolled data
ID: py/polynomial-redos
Kind: path-problem
Severity: warning
Precision: high
Tags:
- security
- external/cwe/cwe-1333
- external/cwe/cwe-730
- external/cwe/cwe-400
Query suites:
- python-code-scanning.qls
- python-security-extended.qls
- python-security-and-quality.qls
Some regular expressions take a long time to match certain input strings, to the point where the time it takes to match a string of length n is proportional to n^k or even 2^n. Such regular expressions can negatively affect performance, or even allow a malicious user to perform a Denial of Service ("DoS") attack by crafting an expensive input string for the regular expression to match.
The regular expression engine provided by Python uses a backtracking non-deterministic finite automaton to implement regular expression matching. While this approach is space-efficient and allows supporting advanced features like capture groups, it is not time-efficient in general. The worst-case time complexity of such an automaton can be polynomial or even exponential, meaning that for strings of a certain shape, increasing the input length by ten characters may make the automaton about 1000 times slower.
Typically, a regular expression is affected by this problem if it contains a repetition of the form `r+` where the sub-expression `r` is ambiguous in the sense that it can match some string in multiple ways. More information about the precise circumstances can be found in the references.
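As a concrete illustration, the classic catastrophic pattern `^(a+)+$` repeats the ambiguous sub-expression `a+`; a suffix that makes the overall match fail forces the engine to try every way of splitting the run of `a` characters (a rough timing sketch):

```python
import re
import time

# `(a+)+` can match a run of "a" characters in exponentially many
# ways; the trailing "b" guarantees an overall mismatch, so the
# engine backtracks through all of them before giving up.
pattern = re.compile(r"^(a+)+$")

for n in (18, 20, 22, 24):
    start = time.perf_counter()
    pattern.match("a" * n + "b")
    print(f"n={n}: {time.perf_counter() - start:.3f}s")
    # Expect roughly a 4x slowdown for every two extra characters.
```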
Modify the regular expression to remove the ambiguity, or ensure that the strings matched with the regular expression are short enough that the time-complexity does not matter.
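In code, the two mitigations could look as follows (a minimal sketch; the pattern and the length cap are illustrative):

```python
import re

# Mitigation 1: remove the ambiguity. `^a+$` accepts exactly the
# same strings as the catastrophic `^(a+)+$`, but matches in
# linear time.
SAFE_PATTERN = re.compile(r"^a+$")

# Mitigation 2: bound the input length before matching, so that even
# a worst-case pattern has a bounded running time.
MAX_LEN = 1000  # illustrative limit

def is_valid(s: str) -> bool:
    return len(s) <= MAX_LEN and SAFE_PATTERN.match(s) is not None
```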
Consider this use of a regular expression, which removes all leading and trailing whitespace in a string:
re.sub(r"^\s+|\s+$", "", text) # BAD
"\s+$" will match the whitespace characters in
text from left to right, but it can start matching anywhere within a whitespace sequence. This is problematic for strings that do not end with a whitespace character. Such a string will force the regular expression engine to process each whitespace sequence once per whitespace character in the sequence.
This ultimately means that the time cost of trimming a string is quadratic in the length of the string. So a string like "a b" will take milliseconds to process, but a similar string with a million spaces instead of just one will take several minutes.
Avoid this problem by rewriting the regular expression to remove the ambiguity about when to start matching whitespace sequences, for instance by using a negative look-behind (`^\s+|(?<!\s)\s+$`), or just by using the built-in strip method (`text.strip()`).
Note that the sub-expression `^\s+` is not problematic, as the `^` anchor restricts when that sub-expression can start matching, and as the regular expression engine matches from left to right.
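A quick check confirms this (illustrative):

```python
import re
import time

# `^\s+` can only start matching at position 0, so even a very long
# whitespace run is processed in linear time.
s = " " * 1_000_000 + "a"
start = time.perf_counter()
re.sub(r"^\s+", "", s)
print(f"{time.perf_counter() - start:.3f}s")  # completes almost instantly
```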
As a similar, but slightly subtler problem, consider the regular expression that matches lines with numbers, possibly written using scientific notation:
^0\.\d+E?\d+$ # BAD
The problem with this regular expression is in the sub-expression `\d+E?\d+`, because the second `\d+` can start matching digits anywhere after the first match of the first `\d+` if there is no `E` in the input string.
This is problematic for strings that do not end with a digit. Such a string will force the regular expression engine to process each digit sequence once per digit in the sequence, again leading to a quadratic time complexity.
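Again, the quadratic cost shows up when timing the pattern on almost-matching inputs (illustrative sketch):

```python
import re
import time

pattern = re.compile(r"^0\.\d+E?\d+$")  # BAD

# A long digit run followed by a non-digit: the two \d+
# sub-expressions backtrack over every way of splitting the run.
for n in (2_000, 4_000, 8_000):
    s = "0." + "1" * n + "x"
    start = time.perf_counter()
    pattern.match(s)
    print(f"n={n}: {time.perf_counter() - start:.3f}s")
    # Doubling n roughly quadruples the running time.
```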
To make the processing faster, the regular expression should be rewritten such that the two `\d+` sub-expressions do not have overlapping matches:
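One such rewrite makes the exponent an explicit optional group, so that the second `\d+` can only start matching after a literal `E`:

^0\.\d+(E\d+)?$ # GOOD

On the almost-matching inputs from the sketch above, this pattern fails in linear time, since the second `\d+` is only ever tried after an `E`. (Note that the rewrite also accepts single-digit fractions such as 0.5, which the original pattern, perhaps unintentionally, rejected.)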
References

Wikipedia: Time complexity.
James Kirrage, Asiri Rathnayake, Hayo Thielecke: Static Analysis for Regular Expression Denial-of-Service Attacks.
Common Weakness Enumeration: CWE-1333.
Common Weakness Enumeration: CWE-730.
Common Weakness Enumeration: CWE-400.