To me, exception-handling can be great if everything conforms to RAII, you reserve exceptions for truly exceptional paths (e.g., a corrupt file being read), you never need to throw across module boundaries, and your entire team is on board with them...
Zero-Cost EH
Most compilers these days implement zero-cost exception handling, which can make exceptions even cheaper than manually branching on error conditions in regular execution paths, though in exchange it makes exceptional paths enormously expensive. (There is also some code bloat from it, though probably not more than if you thoroughly handled every possible error manually.) The fact that throwing is enormously expensive shouldn't matter, though, if exceptions are being used the way they're intended in C++: for truly exceptional circumstances that shouldn't happen in normal conditions.
Side Effect Reversal
That said, making everything conform to RAII is much easier said than done. It's easy to wrap the local resources of a function into C++ objects with destructors that clean them up, but it's not so easy to write undo/rollback logic for every single side effect that could occur in the entire software. Just consider how hard it is to make a container like std::vector, the simplest of random-access sequences, perfectly exception-safe. Now multiply that difficulty across all the data structures of an entire large-scale software.
As a basic example, consider an exception encountered in the process of inserting items into a scene graph. To properly recover from that exception, you might need to undo those changes and restore the system to a state as though the operation never occurred. That means automatically removing the children inserted into the scene graph when the exception is encountered, perhaps from a scope guard.
Doing this thoroughly in software that causes many side effects and deals with a lot of persistent state is much easier said than done. Often I think the pragmatic solution is to not bother with it in the interest of getting things shipped. With the right kind of codebase -- one with some kind of central undo system, maybe persistent data structures, and a minimal number of functions causing side effects -- you might be able to achieve exception-safety across the board... but that's a lot of infrastructure to need if all you're going to do with it is make your code exception-safe everywhere.
There's something practically wrong there: side effect reversal is conceptually a difficult problem to begin with, but it's made even more difficult in C++. It's actually easier sometimes to reverse side effects in C in response to an error code than it is in response to an exception in C++. One of the reasons is that any given point in any function could potentially throw in C++. If you're trying to write a generic container working with type T, you can't even anticipate where those exit points are, since anything involving T could throw. Even comparing one T with another for equality could throw. Your code has to handle every possibility, and the number of possibilities multiplies exponentially in C++, whereas in C they are much fewer in number.
Lack of Standards
Another problem with exception-handling is the lack of central standards. There are a few -- hopefully everyone agrees that destructors should never throw, since rollbacks should never fail -- but that's just covering a pathological case that would generally crash the software if you violated the rule.
There should be some more sensible standards like never throw from a comparison operator (all functions which are logically immutable should never throw), never throw from a move ctor, etc. Such guarantees would make it so much easier to write exception-safe code, but we have no such guarantees. We have to kind of go by the rule that everything could throw -- if not now, then possibly in the future.
Worse, people from different language backgrounds look at exceptions very differently. Python actually has a "leap before you look" (EAFP) philosophy which involves triggering exceptions in regular execution paths! When you then get such developers writing C++ code, you could be looking at exceptions being thrown left and right for things that aren't really exceptional, and handling it all can be a nightmare if you're trying to write generic code.
Error-Handling
Error handling has the disadvantage that you have to check for every possible error that could occur. But it does have one advantage: sometimes you can defer the error check slightly, like glGetError does, which significantly reduces the exit points in a function as well as making them explicit. Sometimes you can keep running the code a little longer before checking, propagating, and recovering from an error, and sometimes that can genuinely be easier than exception-handling (especially with side effect reversal). But you might not even bother with errors so much in a game, maybe just shutting the game down with a message if you encounter things like corrupt files or out-of-memory errors.
How does it pertain to the development of libraries used across multiple projects?
A nasty part of exceptions in DLL contexts is that you cannot safely throw exceptions from one module to another unless you can guarantee that both are built by the same compiler, use the same settings, etc. So if you're writing, say, a plugin architecture intended to be used from all kinds of compilers and possibly even from different languages like Lua, C, C#, Java, etc., exceptions often start to become a major nuisance, since you have to swallow every exception and translate it to error codes all over the place anyway.
What does it do to using third-party libraries?
If they're dylibs, then they cannot throw exceptions safely for reasons mentioned above. They'd have to use error codes which also means you, using the library, would have to constantly check for error codes. They could wrap their dylib with a statically-linked C++ wrapper lib you build yourself which translates the error codes from the dylib into thrown exceptions (basically throwing from your binary into your binary). Generally I think most third party libs shouldn't even bother and just stick to error codes if they use dylibs/shared libs.
How does it affect unit testing, integration testing, etc?
I generally don't encounter teams testing the exceptional/error paths of their code so much. I used to do it, and actually had code that could properly recover from bad_alloc because I wanted to make my code ultra-robust, but it doesn't really help when I'm the only one doing it on a team. In the end I stopped bothering. I imagine that for mission-critical software, entire teams might check to make sure their code robustly recovers from exceptions and errors in all possible cases, but that's a lot of time to invest. The time makes sense for mission-critical software, but maybe not for a game.
Love/Hate
Exceptions are also my kind of love/hate feature of C++ -- one of the most deeply-impactful features of the language, making RAII more like a requirement than a convenience, since it turns the way you have to write code upside down compared to, say, C, to handle the fact that any given line of code could be an implicit exit point for a function. Sometimes I almost wish the language didn't have exceptions. There are times when I find exceptions really useful, but they are too much of a dominating factor in whether I choose to write a piece of code in C or C++.
They would be exponentially more useful to me if the standards committee really focused on establishing ABI standards which allowed exceptions to be safely thrown across modules, at least from C++ code to C++ code. It would be awesome if you could safely throw across languages, though that's kind of a pipe dream. The other thing that would make exceptions exponentially more useful to me is if noexcept functions actually caused a compiler error when they tried to invoke any functionality that could throw, instead of just calling std::terminate when an exception escapes. As it stands, that's next to useless (might as well call it icantexcept instead of noexcept). It would be so much easier to ensure that all destructors are exception-safe, for example, if noexcept caused a compiler error when the destructor did anything that could throw. If that's too expensive to thoroughly determine at compile-time, then just make it so noexcept functions (which all destructors implicitly should be) are only allowed to call other noexcept functions/operators, and make it a compiler error to throw from any one of them.