
I'm in the process of analyzing a potentially large web site, and I have a couple of questions.

The web site is going to be written in ASP.NET MVC 3 with the Razor view engine. In most examples I find, the controllers use the underlying database directly (through a domain/repository pattern), so there's no WCF service in between. My first question is: is this architecture suitable for a big site with a lot of traffic? It's always possible to load balance the site, but is this a good approach? Or should I make the site use WCF services that interact with the data?
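For reference, this is the kind of direct controller-to-repository wiring I mean (IProductRepository, Product and ProductsController are just made-up names for illustration):

```csharp
using System.Web.Mvc;

// Hypothetical repository abstraction over the database.
public interface IProductRepository
{
    Product GetById(int id);
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The controller talks to the repository directly -- no WCF service in between.
public class ProductsController : Controller
{
    private readonly IProductRepository _repository;

    public ProductsController(IProductRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Details(int id)
    {
        return View(_repository.GetById(id));
    }
}
```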

Question 2: I would like to adopt CQS principles, which means that I want to separate the querying part from the commanding part. The querying part will therefore have a different model (optimized for the views) than the commanding part (optimized to the business intent and containing only the properties needed to complete the command) - but both act on the same database. Do you think this is a good idea?
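To make the separation concrete, this is roughly what I have in mind (all type names are invented for the example):

```csharp
// Command side: only the data needed to express one business intent.
public class RenameProductCommand
{
    public int ProductId { get; set; }
    public string NewName { get; set; }
}

// Query side: a flattened shape optimized for a specific view.
public class ProductListItem
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
    public int TimesSold { get; set; }
}
```

Both models would ultimately be persisted to and read from the same database.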

Thanks for the advice!

1 Answer

  1. For scalability, it helps to separate back-end code from front-end code. So if you put UI code in the MVC project and as much processing code as possible in one or more separate WCF and business logic projects, not only will your code be clearer but you will also be able to scale the layers/tiers independently of each other.

  2. CQRS is great for high-traffic websites. I think CQRS, properly combined with a good base library for DDD, is good even for low-traffic sites because it makes business logic easier to implement. The separation of data into a read-optimized model and a write-optimized model also makes sense architecturally: it may be a bit more work up front, but it's much easier to make changes without breaking something.

However, if both act on the same database, I would make sure that the read model consists entirely of database views, so that you can modify the underlying entities as needed without breaking the read code. The advantage is that you have to write less code; the trade-off is that your write model still consists of a full-fledged entity model rather than just an event store.
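As a rough sketch of what "read model as views" can look like, here is a read-side class mapped onto a database view with Entity Framework. I'm using the code-first DbContext style (EF 4.1) rather than the EDMX designer purely to keep the illustration in code; the ProductSummary view and class are made-up names:

```csharp
using System.Data.Entity;   // EF 4.1 DbContext API

// Read-side class that mirrors the columns of a database view.
public class ProductSummary
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
    public int TimesSold { get; set; }
}

public class ReadModelContext : DbContext
{
    public DbSet<ProductSummary> ProductSummaries { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // EF maps to the view as if it were a table; this context is only ever
        // queried, never used to save changes, so the view can stay read-only.
        modelBuilder.Entity<ProductSummary>()
                    .HasKey(p => p.ProductId)
                    .ToTable("ProductSummary");
    }
}
```

The underlying entities (the write model) can then be refactored freely, as long as the ProductSummary view keeps producing the same columns.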

EDIT to answer your extra questions:

What I like to do is use a WCF Data Service for the Read model. This technology (specific to .NET 4.0) builds an OData (= REST + Atom with LINQ support) web service on top of a data model, such as an Entity Framework EDMX.

So, I build a Read model in SQL Server (Views), then build an Entity Framework model from that, then build a WCF Data Service on top of that, in read-only mode. That sounds a lot more complicated than it is; it only takes a few minutes. You don't need to create yet another model; just expose the EDMX as read-only. See also http://msdn.microsoft.com/en-us/library/cc668794.aspx.
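A minimal read-only Data Service over such a model might look like the following; ReadModelEntities stands in for whatever your generated EDMX context is called:

```csharp
using System.Data.Services;
using System.Data.Services.Common;

// Exposes the Entity Framework read model as an OData feed, read-only.
public class ReadService : DataService<ReadModelEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Reads are allowed on every entity set; inserts/updates/deletes are not.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}
```

Clients can then query it either through a generated service reference with LINQ, or with plain OData URLs such as /ReadService.svc/ProductSummaries?$filter=Price gt 10 (entity set names follow your model).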

The Command service is then just a one-way regular WCF service, the Read service is the WCF Data Service, and your MVC application consumes them both.
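As a sketch, the one-way command contract could be as simple as this (the operation and DTO names are placeholders):

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class RenameProductCommand
{
    [DataMember] public int ProductId { get; set; }
    [DataMember] public string NewName { get; set; }
}

[ServiceContract]
public interface ICommandService
{
    // IsOneWay: the MVC application fires the command and does not wait for a reply.
    [OperationContract(IsOneWay = true)]
    void RenameProduct(RenameProductCommand command);
}
```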


3 Comments

Thanks Roy, this makes sense :) Still one question: if you have a WCF service in between the ASP.NET MVC web app and the two models (query and command), do you create additional data contracts (so you need extra mapping), or do you expose the models as data contracts themselves? And would you use a REST-enabled service or plain WCF?
Ludwig, I modified my answer to answer your extra questions.
Great, this answers all my questions! Thanks!!
