
My WCF service keeps DB connections open for sending SQL through them. Sometimes a connection becomes broken for various reasons. Previously there was a special timer that checked the connections every minute, but that is not a good solution to the problem. Could you please advise me on a way to keep the connections working properly, or at least to reconnect as soon as possible, so I can deliver users a stable service. Thanks!

EDIT: The database server is Oracle. I'm connecting to the database server using devart dotConnect for Oracle.

  • Pooling database connections is generally not a great idea; depending on the scenario (which isn't exactly stated) I would tend to go with a "unit of work" connection each time. Any pooling benefits can and will come from the innards of the connection pool itself. Commented Oct 4, 2011 at 11:09
  • Post some code. That might help us answer the question with code from your context... Commented Oct 4, 2011 at 11:11
  • @Mr. Disappointment: this would already be the answer and not a comment ;-) Commented Oct 4, 2011 at 11:11

1 Answer

You don't have to "keep" database connections open. Leave the reuse and caching of database connections to the .NET Framework.

Just use this kind of code and dispose of the connection as soon as you are finished using it:

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    // Your code here; Dispose returns the connection to the pool
}

There is no problem in executing the code above for each call to the database. The connection information is cached, and opening a "new" connection to the database the second time is very fast.
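If you want explicit control over the pool, ADO.NET lets you tune it through the connection string. A minimal sketch, assuming placeholder server, user, and pool-size values (none of these come from the question); `Pooling`, `Min Pool Size`, and `Max Pool Size` are standard ADO.NET connection string keywords:

```csharp
using System.Data.SqlClient;

// Hypothetical connection string; the pool-related keywords control the
// built-in ADO.NET connection pool.
const string connectionString =
    "Data Source=myServer;User Id=myUser;Password=myPassword;" +
    "Pooling=true;Min Pool Size=5;Max Pool Size=100";

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();   // taken from the pool if one is available
    // ... run your command here ...
}                        // Dispose returns the connection to the pool
```

With `Min Pool Size` above zero, the pool keeps that many connections open even when idle, so bursts of requests don't pay the handshake cost.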

To read more about connection pooling, you might read this MSDN article.

Edit:

If you use pooling, the connection is not really closed but returned to the pool. The initial "handshake" between the client and the database is done only once per connection in the pool.
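Since the question mentions dotConnect for Oracle, the same pattern applies with that provider's own pool. A sketch with a placeholder connection string: `Pooling` and `Validate Connection` are dotConnect connection string options (to my knowledge, `Validate Connection=true` makes the provider check a connection taken from the pool before handing it out, which addresses the broken-connection concern — verify against your provider version's documentation):

```csharp
using Devart.Data.Oracle;

// Hypothetical credentials; check your dotConnect version's documentation
// for the exact pooling options it supports.
const string connectionString =
    "Server=ora;User Id=scott;Password=tiger;" +
    "Pooling=true;Validate Connection=true";

using (var connection = new OracleConnection(connectionString))
{
    connection.Open();  // pooled connection, validated before reuse
    using (var command = new OracleCommand("SELECT 1 FROM dual", connection))
    {
        command.ExecuteScalar();
    }
}
```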

The component you are using supports connection pooling as well:

Read 1, Read 2


22 Comments

Thanks for your response. There are a lot of requests being sent intensively to the DBs, so it's much faster to keep all the DB connections open.
I don't think so. I've built a transaction system that did millions of database calls a day and there was no big performance hit. Sharing a connection across more than one unit of work can be very dangerous and error-prone.
But wrong. Every time a connection is taken out of the pool, a reset sequence is executed to test that the connection is still valid and to clear any fragments left over from its previous use. This is clearly visible in a trace of all SQL being run.
That said, the overhead is normally negligible compared to the running SQL statements.
I know - but reusing a known connection is still faster. We optimized it that way for a system doing a couple of tens of thousands of connections per second.