There is an intranet-based ASP.NET application that is deployed to a server (IIS) and to a group of clients (about ten). The end user can then choose to connect either to the local copy of the application (deployed to their own machine) or to the server version. I do not understand the reasoning behind this. My question is: is this common practice?

2 Answers

Yes, it is common practice, and it helps verify how the application performs. Each client will have its own settings, and the application should not break in any environment, so it is beneficial to deploy both a server version and a local version.

10 Comments

This answer doesn't make any sense. Why would you care about how a program performs on a client machine unless there were some reason to run the process on the client, ESPECIALLY if it is a web process? You don't "verify the performance" in production, anyway.
@pseudocoder, by writing "You don't 'verify the performance' in production, anyway," you have indicated that you have not worked in a live environment, because production is the environment where performance matters... no offence.
@CodeSpread I see your point about performance metrics, but you still don't explain why you'd care about the performance of a server process on a machine that doesn't need to run that process for any apparent reason. The question is what reasons could there be to run web server code locally on a client.
Performance only relates to the server. When all 10 people choose the server setting, then performance and other server-related concerns come into the picture. And when we deploy to the 10 clients, we are deploying the application to 10 different environments, where it might break because of local settings.
@CodeSpread I see what you're getting at, but what you are describing to me is a lab or simulation environment where you are testing performance and testing compatibility. In contrast, a production environment is where you measure and verify performance, and compatibility is not a concern if the application is already deployed and working.

If the clients are laptops, and the application supports disconnected data sets and synchronization, it would make sense. Typically you'd see something like this when the client machines are taken off-network to be used at a remote work site.
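
A minimal sketch of what that "disconnected data" design could look like in code, using ADO.NET DataSets. The connection string, query, and WorkOrders table are hypothetical placeholders for illustration, not details taken from the question:

    using System.Data;
    using System.Data.SqlClient;

    class DisconnectedSyncSketch
    {
        // Hypothetical connection string and query -- placeholders only.
        const string ConnStr = "Data Source=APPSERVER;Initial Catalog=FieldDb;Integrated Security=True";
        const string Select  = "SELECT Id, Description, Status FROM WorkOrders";

        // While connected to the LAN: pull the rows the client will need offline.
        public static DataSet Checkout()
        {
            var ds = new DataSet("Offline");
            using (var adapter = new SqlDataAdapter(Select, ConnStr))
            {
                adapter.Fill(ds, "WorkOrders");
            }
            return ds;
        }

        // At the remote site, with no network: after the user edits rows in the
        // DataSet, persist it locally as a DiffGram, which preserves the
        // original/current values needed to synchronize later.
        public static void SaveLocally(DataSet ds, string path)
        {
            ds.WriteXml(path, XmlWriteMode.DiffGram);
        }

        // Back on the network: reload the edited copy and push the changes to the server.
        public static void Synchronize(string path)
        {
            var ds = new DataSet("Offline");
            using (var adapter = new SqlDataAdapter(Select, ConnStr))
            {
                adapter.FillSchema(ds, SchemaType.Source, "WorkOrders"); // DiffGram load needs the schema first
                ds.ReadXml(path, XmlReadMode.DiffGram);

                var builder = new SqlCommandBuilder(adapter);            // generates INSERT/UPDATE/DELETE commands
                adapter.Update(ds, "WorkOrders");
            }
        }
    }

If the locally deployed copy of the site works against locally persisted data like this and only synchronizes when the LAN is reachable, that would explain offering both a "local" and a "server" option.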

5 Comments

Would you expect to see this for desktops?
Desktops as clients wouldn't fit my theory above, but I have a feeling the real key is to examine the inner workings of the application to see what the difference is between "local" mode and "server" mode. That should lead you to why the system was designed the way it was.
Another theory that would go along with desktops is that maybe the system designers were worried about LAN outages or server outages. Again, if this was the case you'd see disconnected data sets and synchronization in the source code.
@pseudocoder, in a web scenario, are you by any chance referring to mirroring a site?
No I'm not talking about mirroring. I am talking about disconnected data, which is the only reason I can think of, in a production environment, that you would ever want to deploy a web application to BOTH a local webserver on the client and to a server.
