*whether it is possible to write a program which will take an arbitrary regexp and construct another which will act as its negation against all possible input strings?*
A program to do it? Sure! An efficient program? No, at least in the classical regex sense. To negate a regex, you convert it to an NFA, then to a DFA, complement the DFA (swap its accepting and rejecting states), and convert that back to a regex. This is basic material from a first course in CS theory. The problem is that it's really inefficient: the NFA->DFA step alone introduces an exponential blowup in size. Even the special case of deciding whether the negation of a regex is the empty regex (the regex that accepts nothing) is PSPACE-complete (that means it's bad), let alone computing arbitrary regex negations.
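The middle two steps of that pipeline (subset construction, then inverting the accepting states) fit in a few lines. A minimal sketch, with a hand-built NFA for "strings over {a,b} ending in ab" standing in for the regex-to-NFA step; all names and the representation here are illustrative, not from any real module:

```python
# Sketch of NFA -> DFA -> complement. The NFA recognizes strings over
# {a,b} that end in "ab"; complementing the resulting DFA yields a
# recognizer for all strings that do NOT end in "ab".
ALPHABET = "ab"

# NFA: states 0,1,2; accepting state {2}; missing keys mean "no move".
nfa_delta = {
    (0, "a"): {0, 1},
    (0, "b"): {0},
    (1, "b"): {2},
}
nfa_start, nfa_accept = 0, {2}

def nfa_to_dfa(delta, start, accept):
    """Subset construction: each DFA state is a frozenset of NFA states."""
    start_set = frozenset([start])
    dfa_delta, todo, seen = {}, [start_set], {start_set}
    while todo:
        S = todo.pop()
        for c in ALPHABET:
            T = frozenset(q for s in S for q in delta.get((s, c), set()))
            dfa_delta[(S, c)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    dfa_accept = {S for S in seen if S & accept}
    return dfa_delta, start_set, dfa_accept, seen

def complement(states, accept):
    """Complement a total DFA by inverting its accepting states."""
    return states - accept

def dfa_accepts(delta, start, accept, s):
    S = start
    for c in s:
        S = delta[(S, c)]
    return S in accept

delta, start, accept, states = nfa_to_dfa(nfa_delta, nfa_start, nfa_accept)
neg_accept = complement(states, accept)

print(dfa_accepts(delta, start, accept, "aab"))      # True: ends in "ab"
print(dfa_accepts(delta, start, neg_accept, "aab"))  # False
print(dfa_accepts(delta, start, neg_accept, "aba"))  # True
```

The exponential blowup lives in `nfa_to_dfa`: `seen` can grow to 2^n subsets of an n-state NFA, which is exactly why this approach doesn't scale.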
That aside, I've been working with someone else on a suite of modules for dealing with regular languages and finite automata that will support negation in exactly this way, if you ever want to see how it actually goes. It will eventually accept standard Perl regexes as input as well, though of course it will be very slow even for moderately sized regexes. Even so, I wouldn't recommend such a module for everyday use -- it is usually much simpler to rewrite the logic surrounding the regex, or to use one of the tricks mentioned earlier in this thread, like a negative lookahead.
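For reference, the negative-lookahead trick mentioned above avoids constructing any automaton at all. A small example (shown in Python's `re`, whose `(?!...)` syntax matches Perl's):

```python
# Match exactly those strings that do NOT contain "foo" anywhere:
# the lookahead at position 0 fails iff ".*foo" can match from there.
import re

not_foo = re.compile(r"^(?!.*foo)")

print(bool(not_foo.match("bar baz")))  # True
print(bool(not_foo.match("a foo b")))  # False
```

This negates "contains foo" in one line, at the cost of being a per-pattern trick rather than a general negation algorithm.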
*I would imagine you'd have to restrict the definition of "regular expression" to something a little less rich than the full Perl set (isn't there a compsci definition?).*
Yes, the classical CS definition allows only the "|" (alternation), "*" (repetition), and concatenation operators. No backreferences as in Perl, no lookaheads, and certainly no embedded Perl code ;)
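To make the distinction concrete, here is a pattern built from only the three classical operators next to one that steps outside them (examples of my own choosing):

```python
# The first pattern uses only |, *, and concatenation -- classical.
# The second uses \1, a backreference, which the classical definition excludes.
import re

classical = re.compile(r"(a|b)*abb")
extended  = re.compile(r"(\w+) \1")

print(bool(classical.fullmatch("ababb")))  # True
print(bool(extended.fullmatch("ha ha")))   # True
```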
*Presumably if regexps form a Turing-complete language ...*
The expressive power of classical regexes is about as far from Turing-complete as we know how to get ;) Extending them with backreferences makes the matching problem NP-complete, but even then they are nowhere near Turing-complete.
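A one-line illustration of how backreferences exceed regular power: the pattern below recognizes the "squares" language { ww : w nonempty }, a textbook example of a non-regular language that no finite automaton can recognize.

```python
# ^(.+)\1$ matches exactly the strings that are some nonempty word
# repeated twice -- impossible for a classical (backreference-free) regex.
import re

square = re.compile(r"^(.+)\1$")

print(bool(square.match("abcabc")))  # True: "abc" twice
print(bool(square.match("abcabd")))  # False
```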

Actually, it's entirely possible that what you describe is Turing-complete. The fact that a single regex search-and-replace is one computing step doesn't place any particular limit on the expressive power of your scheme.
The same is true of an LR parser, which repeatedly uses a finite automaton to find handles. Finite automata alone can only recognize regular languages, yet LR parsers can cope with any language parseable by a deterministic pushdown automaton, a strictly larger class. (For example, it includes the language of expressions with matching parentheses, which is not a regular language.)
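The same "weak step, powerful loop" effect shows up with regex substitution itself. A toy sketch: each `re.sub` call is just a regular-language operation, yet iterating it decides the matched-parentheses language mentioned above.

```python
# Repeatedly delete "()" until nothing changes: the input was balanced
# iff the string erases completely. One substitution is a bounded step,
# but the loop gives the scheme non-regular power.
import re

def balanced(s):
    prev = None
    while s != prev:
        prev, s = s, re.sub(r"\(\)", "", s)
    return s == ""

print(balanced("(()())"))  # True
print(balanced("(()"))     # False
```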

Regular expressions, at least under the computer science definition, are equivalent in expressive power to the regular languages, hence their name. This means they can be defined in terms of a deterministic or non-deterministic finite automaton. Adding a stack gives a push-down automaton, which can recognize context-free languages. Adding a second stack gives something equivalent in power to a Turing machine.
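A hand-rolled sketch of the first step up that ladder: a push-down automaton (here, a loop plus an explicit stack) recognizing { a^n b^n : n >= 0 }, a context-free language beyond any finite automaton.

```python
# PDA-style recognizer for a^n b^n: the stack supplies the unbounded
# counting that regular expressions and finite automata lack.
def pda_anbn(s):
    stack = []
    seen_b = False
    for c in s:
        if c == "a":
            if seen_b:
                return False   # an 'a' after any 'b' is never accepted
            stack.append("A")  # push one marker per 'a'
        elif c == "b":
            seen_b = True
            if not stack:
                return False   # more b's than a's
            stack.pop()        # match one 'a' off the stack
        else:
            return False
    return not stack           # accept iff the counts balanced

print(pda_anbn("aaabbb"))  # True
print(pda_anbn("aabbb"))   # False
```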

Thanks. I have a limited CS background (some register machines, recursive and primitive recursive functions) and this gives me quite a few pointers for picking up more.

Comment on Re: Negating Regexes: Tips, Tools, And Tricks Of The Trade