
MVC - What a view should never contain (part 2)

Part 1
Yesterday I promised to follow up with a post about what a view should never contain.
We start from a PersonModel class with the following definition:
public class PersonModel
{
    public string Name { get; set; }
    public int Age { get; set; }
    public string Address { get; set; }
}
For this model we have the following view:
@model PersonModel

@{
    Layout = null;
}

<!DOCTYPE html>

<html>
<head>
    <title>Person</title>
</head>
<body>
    <fieldset>
        <legend>PersonModel</legend>

        <div class="display-label">Name</div>
        <div class="display-field">
            @Html.DisplayFor(model => model.Name)
        </div>

        <div class="display-label">Age</div>
        <div class="display-field">
            @Html.DisplayFor(model => model.Age)
        </div>

        <div class="display-label">Address</div>
        <div class="display-field">
            @Html.DisplayFor(model => model.Address)
        </div>
    </fieldset>
</body>
</html>
Later, a new requirement appears: the application must also display the GPS coordinates of the given address. For this, a web endpoint is used that can resolve any address. A simple implementation looks like this:
@model CIC.PersonModel
@using System.Net
@using System.IO
@{
    Layout = null;

    string coordLat;
    string coordLong;

    // The view itself calls the coordinate-resolving endpoint - this is the problem.
    var request = WebRequest.Create(someAddress + "?location=" + Model.Address);
    var webResponse = request.GetResponse();
    using (var contentStream = new StreamReader(webResponse.GetResponseStream()))
    {
        var content = contentStream.ReadToEnd();
        webResponse.Close();
        string[] coords = content.Split(' ');
        coordLong = coords[0];
        coordLat = coords[1];
    }
}
<!DOCTYPE html>
<html>
<head>
    <title>Person</title>
</head>
<body>
    <fieldset>
        <legend>PersonModel</legend>

        <div class="display-label">Name</div>
        <div class="display-field">
            @Html.DisplayFor(model => model.Name)
        </div>

        <div class="display-label">Age</div>
        <div class="display-field">
            @Html.DisplayFor(model => model.Age)
        </div>

        <div class="display-label">Address</div>
        <div class="display-field">
            @Html.DisplayFor(model => model.Address)
        </div>

        <div class="display-label">GPS Location</div>
        <div class="display-field">
            @coordLong
            @coordLat
        </div>
    </fieldset>
</body>
</html>
The page works without any problem, except that the view does far too much. From inside it, a request is made to another component (a service) to obtain the GPS coordinates of an address, which are then displayed in the view.
The model does not contain all the data needed to render everything required. Because of this, a call to an external component ends up being made. It does not matter whether this component is an external service or a class from our own assembly. All the necessary data must be contained by the model. Any piece of data must reach the view through the model.
One solution is to add two properties to PersonModel that represent the GPS coordinates (or we can create a class that stores the longitude and latitude, but it would still end up as a property of PersonModel).
public class PersonModel
{
    public string Name { get; set; }
    public int Age { get; set; }
    public string Address { get; set; }
    public string Longitude { get; set; }
    public string Latitude { get; set; }
}
Initializing the coordinates is then moved into the controller. The problem we started from is solved, but... a smell appears on the horizon. Not a very strong one, but an extremely dangerous one. The call to the service is made directly from the controller, which is not right either. Yes, it is true that the controller should prepare the model, but obtaining the coordinates should happen somewhere else. For example, we can extract this call into a separate class.
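A minimal sketch of how that extraction could look. The names here (ILocationService, GpsLocationService, PersonController, the serviceUrl field and the LoadPerson helper) are illustrative assumptions, not code from the original application:

```csharp
// Hypothetical abstraction over the address-resolving endpoint.
public interface ILocationService
{
    GpsCoordinates ResolveAddress(string address);
}

public class GpsCoordinates
{
    public string Longitude { get; set; }
    public string Latitude { get; set; }
}

public class GpsLocationService : ILocationService
{
    private readonly string serviceUrl;

    public GpsLocationService(string serviceUrl)
    {
        this.serviceUrl = serviceUrl;
    }

    public GpsCoordinates ResolveAddress(string address)
    {
        var request = System.Net.WebRequest.Create(serviceUrl + "?location=" + address);
        using (var response = request.GetResponse())
        using (var reader = new System.IO.StreamReader(response.GetResponseStream()))
        {
            // The endpoint is assumed to return "longitude latitude".
            string[] coords = reader.ReadToEnd().Split(' ');
            return new GpsCoordinates { Longitude = coords[0], Latitude = coords[1] };
        }
    }
}

// The controller only prepares the model; how the coordinates
// are obtained is hidden behind the injected service.
public class PersonController : Controller
{
    private readonly ILocationService locationService;

    public PersonController(ILocationService locationService)
    {
        this.locationService = locationService;
    }

    public ActionResult Details()
    {
        PersonModel model = LoadPerson(); // however the person data is loaded
        GpsCoordinates coords = locationService.ResolveAddress(model.Address);
        model.Longitude = coords.Longitude;
        model.Latitude = coords.Latitude;
        return View(model);
    }
}
```

With this shape the view only binds model properties, the controller stays thin, and the service call can be mocked in tests.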

Comments

  1. On this topic - what the controller's responsibilities are - programmers still hold various opinions :)
    Whether the model is loaded from a web service, a database or a file is not that important; normally I will only work with an interface injected into the controller, but some will say:

    - the model should know how to load itself ("ActiveRecord", especially in the Ruby world), and the controller should just call model.Load()

    - others will say: no, ActiveRecord is an anti-pattern, the model should be as dumb as possible, and another class should be responsible for loading it ("service", "repository", "context" or other names...) - especially in the Java world, which sees the model as a DTO

    - after which others will come and say: a DTO leads to an anemic domain model, which is again an anti-pattern, and they either go back to the first variant or to a combination of domain model + viewmodel...

    The .NET world, which entered the game later, follows either one camp or the other... :)
  2. I prefer the action in the controller to be as simple as possible... so, probably, ActiveRecord, no?
  3. As Tudor said, opinions are divided. For a view model, I prefer it to be as simple as possible. If it had the ActiveRecord shape, objects or data that you do not need could end up in the model.
    Personally, I like to draw a very clear and precise line between the objects in the business layer and those in the UI (the model). Even if the two coexist, I prefer them to be different, and the model to contain only the data I need in the view and nothing more. Any extra field that is not used in the view has no reason to be there.
  4. Many prefer the model to be dumb (only properties), the view dumb (only "gluing" the model's properties onto HTML), the controller thin (a few service calls here and there), and a separate service layer on top.

