GOTO day 1 roundup – distribute all the things

by DotNetNerd 25. September 2014 15:21

At other conferences I have attended over the last year or so, distribution and concurrency have been hot topics, coupled with functional programming and immutability, which lend themselves well to these kinds of problems. Today's program has certainly been no exception - at least for the talks I ended up picking.

The right Elixir for concurrent fault tolerant systems

In the afternoon I ended up sticking with the bleeding edge track, which has been really interesting. First off was a talk on idioms of distributed applications with Elixir by Jose Valim, who wrote the language. In his talk Jose went over the idioms of Erlang, which is what Elixir is built on top of. He did a good job of presenting why lightweight processes, which are allowed to fail fast and be recreated by supervisors, make it possible to build fault tolerant distributed systems that are easier to understand and run faster than other paradigms, which are often based on handling exceptions via try/catch blocks.

More...

Robust integration with Redis on Azure and Polly

by DotNetNerd 13. July 2014 12:56

A client of mine requested an integration with OpenWeatherMap, so like so many times before it was a chance to think about how to make such an integration robust and performant. It's as common a task as they come, but also something that tends to end up feeling more complex than I would like. Having heard a lot of good things and played a bit with Redis, I felt that it would be a good choice for providing super fast caching, while also allowing for more than basic key/value storage.

Getting off the ground

The project is already running on Azure, so it was an obvious choice to give Azure's new Redis-based caching service a go. As of now the service is still in preview, but for the level of caching I need I feel quite comfortable with it. Getting started was as easy as most things on Azure - click add, fill in a name and press go. As everyday as this has become, I am still blown away by how easy and fast it is every time I need to provision a new VM or service – and a Redis cache is no different.

On the downside I am not quite convinced by the new Azure portal, because to me the UX feels more shiny than useful. As of this writing the caching service is only available through the portal, but in spite of my reservations it was easy to get going, and it provides a nice overview of requests, storage space used, as well as cache hits and misses.
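
To give a rough idea of the kind of setup I am describing, a minimal sketch could combine a Polly retry policy around the OpenWeatherMap call with the result cached in Redis - something along these lines (the connection string, cache key and expiry are of course just placeholders, not the actual code from the project):

using System;
using System.Net;
using Polly;
using StackExchange.Redis;

public class WeatherCache
{
    // Hypothetical Azure Redis connection string - use your own cache name and access key
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("mycache.redis.cache.windows.net,ssl=true,password=...");

    public string GetWeather(string city)
    {
        IDatabase cache = Redis.GetDatabase();
        string key = "weather:" + city;

        // Serve from the Redis cache when we can
        string cached = cache.StringGet(key);
        if (cached != null) return cached;

        // Retry the external call a few times with exponential backoff before giving up
        var retry = Policy
            .Handle<WebException>()
            .WaitAndRetry(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

        string json = retry.Execute(() =>
        {
            using (var client = new WebClient())
                return client.DownloadString("http://api.openweathermap.org/data/2.5/weather?q=" + city);
        });

        // Cache the response for ten minutes so repeated lookups stay fast
        cache.StringSet(key, json, TimeSpan.FromMinutes(10));
        return json;
    }
}

The point is simply that the expensive and failure-prone part - the external HTTP call - is wrapped in a policy, while the hot path is a single Redis lookup.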

More...

NDepend review

by DotNetNerd 2. May 2014 15:13

Lately I have spent a bit of time with NDepend, who contacted me to ask if I wanted a free license in exchange for a blogpost. This was actually great timing on their part, as I was already thinking about giving it another go. Being completely honest, I tried NDepend some years ago, and at the time I simply didn't know where to start and where I would get the most value from using such a tool. So back then I pretty much wrote it off, but I have again and again heard good things from other developers who are using it.

My first thought when I ran NDepend this time around was that a lot had changed. The first thing that met me was a wizard for analysing a project, so I pointed it at my current project. This was the point where I got derailed the first time I tried NDepend, because I remember being met by the code metric view, which does look kind of scary - especially when you are new to a tool like this. Now, however, I was met by a dashboard that is still complex, but a vast improvement, since it gives a pretty good idea of some of the power that NDepend provides. I still can't help thinking that the tool could gain a lot by providing simpler guides through some key use cases though.
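
For those who have not seen it, the rules and queries that drive much of that dashboard are written in CQLinq, NDepend's LINQ-based query language, and look roughly like this (the threshold is just an example of mine, not a recommendation from the tool):

// Flag methods that are getting too complex
warnif count > 0
from m in Application.Methods
where m.CyclomaticComplexity > 20
select new { m, m.CyclomaticComplexity }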

More...

BaaS - cloud based backend in a box

by DotNetNerd 14. November 2013 16:56

IMHO an overlooked part of the otherwise thoroughly hyped cloud technologies is the so-called backend-as-a-service or BaaS technologies. Most presentations revolve around scalability and hosting, which are of course central and important, but nonetheless not the entire cloud story. This is something I have been looking a bit into, because I feel there is so much value in the cloud that we are not picking up on just yet.

More...

Nancy Bootstrapper for Castle Windsor

by DotNetNerd 18. January 2012 19:37

On a current project I have decided to use NancyFx for services that expose data to the client via Ajax. The solution already uses Umbraco for CMS capabilities and everything is wired up using Castle Windsor for DI.

From the start I was hoping to just install the NuGet packages for hosting in an ASP.NET application and for bootstrapping with Windsor. As it turned out, neither worked in my case.
Getting Nancy to run alongside an existing site is pretty well documented, so that went pretty smoothly, once I gave up on the package and just followed the documentation.
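
Conceptually the Windsor part boils down to overriding Nancy's bootstrapper so modules are resolved from your existing container - a minimal sketch, assuming the WindsorNancyBootstrapper base class from the Nancy.Bootstrappers.Windsor package (the repository registration is a made-up example):

using Castle.MicroKernel.Registration;
using Castle.Windsor;
using Nancy.Bootstrappers.Windsor;

public class Bootstrapper : WindsorNancyBootstrapper
{
    protected override void ConfigureApplicationContainer(IWindsorContainer container)
    {
        base.ConfigureApplicationContainer(container);

        // Hypothetical registration - Nancy modules can now take IProductRepository as a constructor dependency
        container.Register(Component.For<IProductRepository>().ImplementedBy<ProductRepository>());
    }
}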

More...

Faceted search with MongoDB and KnockoutJS

by DotNetNerd 5. November 2011 16:52

Almost two years ago I took my first look at MongoDB as my first exploration into NoSQL. Today I still find it to be one of the most interesting NoSQL tools – rivaled mainly by RavenDB for most of what I would call common scenarios. Recently I finally had the chance to use MongoDB in a real project, when we were talking over our options for doing faceted search, where the main design criterion was speed.

To get a good, smooth user experience, the performance of the faceted search is critical. One of the limitations of a traditional SQL database is that everything is modelled as rows and columns. So when you need complex structured objects, you rely on joining tables together and mapping them into objects. This is all well and good in most cases, but when you aim for the very best performance and the objects are well defined, we can do better with tools like MongoDB. Not having to deal with a schema reduces development time when building something that should be denormalized, and avoiding mapping and multiple joins cuts the cost of a query at runtime.
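
To make the idea concrete, the product and all of its facet values can live in a single denormalized document, so a facet filter becomes one query with no joins. A small sketch - using the current official MongoDB C# driver for illustration rather than NoRM, and with made-up field names:

using System.Collections.Generic;
using MongoDB.Driver;

// The whole product, including its facet values, is stored as one document
public class Product
{
    public string Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
    public Dictionary<string, string> Facets { get; set; } // e.g. { "Brand": "Acme", "Color": "Red" }
}

public class FacetedSearch
{
    public List<Product> Search(string brand, string color)
    {
        var products = new MongoClient("mongodb://localhost")
            .GetDatabase("shop")
            .GetCollection<Product>("products");

        // One round trip, no joins - the filter matches directly on the embedded facet values
        var filter = Builders<Product>.Filter.Eq("Facets.Brand", brand)
                   & Builders<Product>.Filter.Eq("Facets.Color", color);

        return products.Find(filter).ToList();
    }
}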

Takes two to perform

Just as it takes two to tango, it takes good performance on both the backend and the frontend to provide an experience that feels fast. To facilitate this I chose to use KnockoutJS – another tool I have blogged about earlier. Knockout was used to handle two-way binding between the model and the elements on the page, with Ajax for requesting the search results from the server and updating the model.

Snapping together the lego pieces

As Scott Hanselman often describes it, modern tools should fit together well, giving the same feeling as lego pieces that snap together. This really was the feeling I had when I implemented the faceted search. Defining the model server-side, passing objects on through a service, and then having them serialized to JSON, which in turn was made into KnockoutJS observables, just felt smooth and painless.

The only thing I had to reconsider was using the LINQ implementation in NoRM, which isn't quite good enough yet. This was however a small hiccup, as the more native API that NoRM provides worked fine and was easy to use.

Looking back, the actual implementation, including creating indexes, did not take long, and the performance just rocks. So this is without a doubt one of the more fun challenges I have had lately, and a solution that I feel proud of.

Dynamic data access with WebMatrix WebPages

by dotnetnerd 11. October 2011 22:06

Today I went to a talk by Hadi Hariri about dynamics, which was arranged for the ANUG user group in cooperation with the GOTO conference. The talk happened to fit very well with the first topic I had planned for my series on WebMatrix WebPages, which is data access.

The talk was about not fearing dynamics, and some of the scenarios where it can provide benefits over static types. Some of the scenarios Hadi talked about were DTOs, view objects and data access, which is exactly what WebMatrix WebPages utilizes.

The flood of Micro-ORMs

Over the last year or so a lot of so-called Micro-ORMs have seen the light of day, with some of the more popular ones being Simple.Data, Dapper, PetaPoco and Massive. The reason for their popularity is that they provide a sense of simplicity in what Ted Neward called the Vietnam of computer science.

Each of these ORMs has its own focus, strengths and weaknesses – and some are more "Micro" than others. Compared to NHibernate or Entity Framework they are all very simple to get started with. For quite a few of them, part of the strength is the return of good old SQL instead of LINQ or some other abstraction.

WebMatrix.Data

When a new WebMatrix WebPages project is created it comes with its own Micro-ORM out of the box. The ORM allows queries and commands, expressed as SQL statements, to be executed. For queries, data is returned as dynamic objects. So a regular query could be done like this.

var db = Database.Open("myDatabase");
dynamic user = db.QuerySingle("SELECT * FROM Users WHERE User_Id = @0", 123);

The big advantage of this approach is that you can select any fields, calculated fields, or join with other tables to your heart's content, and you won't have to write a class to represent each shape of the data returned. This also means that we can use the power of SQL and that we avoid overcomplicating things. Because the distance from database to query and then to the view is so short, working with dynamic objects is not a problem. So if your domain is not too vast and complex, life is good.
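
For example, joining and shaping data on the fly works the same way - each row comes back as a dynamic object shaped like the SELECT (the tables and columns here are just for illustration):

var db = Database.Open("myDatabase");

// No class is needed for the joined shape - the dynamic rows simply expose the selected columns
var orders = db.Query(
    "SELECT o.Order_Id, o.Total, u.Name " +
    "FROM Orders o JOIN Users u ON u.User_Id = o.User_Id " +
    "WHERE o.Total > @0", 100);

foreach (var order in orders) {
    Response.Write(order.Name + ": " + order.Total);
}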

Doing inserts and updates is equally easy, but it is one of the areas where I find the ORM lacking. Most annoying is that it does not handle converting null to DBNull. Also, while you do get extension methods to convert strings to int, DateTime etc., there is no option to get null instead of the default value of the data type. So the code tends to get cluttered with parsing and conversions – if you don't write the extensions yourself.
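
To illustrate, an insert ends up looking something like this, with the null handling and parsing done by hand (the table and columns are made up for the example):

var db = Database.Open("myDatabase");

// AsInt()/AsDateTime() fall back to default values (0, DateTime.MinValue) rather than null,
// and null is not converted to DBNull for you - so that bookkeeping ends up in your own code
var age = Request["age"].AsInt();
DateTime? birthDate = Request["birthDate"].IsDateTime() ? Request["birthDate"].AsDateTime() : (DateTime?)null;

db.Execute("INSERT INTO Users (Name, Age, BirthDate) VALUES (@0, @1, @2)",
    Request["name"], age, (object)birthDate ?? DBNull.Value);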

Clean Ajax

A nice surprise for me was how easy it is to expose data as JSON to enable Ajax when working with WebPages. All you have to do is create a WebPage that retrieves data, pass it to Json.Encode and write it to the response, like this example shows.

var json = Json.Encode(user);
Response.Write(json);

Life does not get much simpler than this, and it leaves you with a smooth feeling when moving data between server and client.

Conclusion

The data access bits for WebMatrix have been fun to work with, and they have given me a great sense of freedom to get things done, without having to do view objects, mappers and a bunch of configuration.

The fluidity of working with dynamic data and doing Ajax has certainly opened my eyes with regard to the value that this kind of framework can provide. "The return of SQL" has also reminded me that LINQ is not all rainbows and smiles. The power of micromanaging a JOIN statement and doing UPDATEs and DELETEs should not be overlooked.

Using the right tool for the right job is still the key phrase though. It has been a good match for the project I am working on, but I would not want to use it for an enterprise application. Testability is clearly not a goal of the framework, refactoring is error-prone and when complexity increases the code tends to get messy. So if anything I will argue that it proves that the place for WebMatrix is with hobbyists, simple projects, startups and prototyping.

What is the Matrix?

by DotNetNerd 29. September 2011 20:50

Currently some of my spare time is spent building an application using WebPages in WebMatrix and Visual Studio Express, which has been fun and given me the chance to look at web development and quality from a different angle. WebMatrix is a free Microsoft tool that is targeted at hobbyists and developers doing lightweight development, by enabling users to build websites either from scratch or on top of open source sites such as Umbraco, Screwturn or dasBlog. All in all there are currently 59 sites to choose from, spread across the categories blogs, CMS, eCommerce, forums, galleries, tools and wikis.


The application I am working on is built from scratch, because the requirements don't really fit with any of the generic systems. For this purpose WebMatrix has the option to build a completely custom site using a framework called WebPages, which utilizes the Razor view engine.

At first look, when you see WebPages you might think of classic ASP or PHP, because it also works by mixing markup and code in the same file. I will admit that at first this made me cringe – as I think most professional developers who are used to building enterprise scale applications will. Having read a bit more about it on various blogs over the last year, it caught my interest for some specific scenarios. The point to me is that it is quite powerful for quickly putting together applications where the complexity is manageable. So to get a startup on its feet quickly, to do a proof of concept or for those who dabble in web development as a hobby, it can definitely provide good value.

In my case it is actually a startup that I am looking to help my brother with – who just happens to dabble a bit in web development once in a while. So for a guy who understands the basics of the web, but is not familiar with the amount of abstraction involved in building an ASP.NET or MVC application, WebMatrix and WebPages seem like a good fit. Having implemented the first couple of pages and a basic structure I am pretty happy - hopefully I won't wake up and think why, oh why, didn't I take the blue pill.

And what is Quality?

A word that is often heard when discussing a framework like this is "quality", or more specifically the lack thereof. For a while I have actually thought about writing a comment on what quality is, because it is something everyone agrees they can and will deliver, but when it comes time to define it people mostly start arguing. If anything, the tendency is that people argue that the skillset they personally possess is what defines quality. TDD guys argue for testability and code coverage, designers argue for the look and feel of a site, and so on and so forth.

Personally I always think back to one of my teachers, who said that quality is the degree to which a product lives up to what the customer expects. That is pretty close to what Google comes up with when referring to Wikipedia and the definition based on the ISO 9000 standard, which defines it as the "degree to which a set of inherent characteristics fulfills requirements". While I think this is a good definition, it also leaves me with a sense that an important aspect is overlooked. As Henry Ford said, "If I'd asked customers what they wanted, they would have said a faster horse". This could be interpreted to mean that the most important thing is guiding the customer and telling them what they never knew they always wanted. So my personal definition is:

Quality exceeds the customer's original expectations and fulfills the requirements to a high degree.

At least, that is the experience I want as a customer, and it is what gives me that amazing feeling of victory when we are able to provide it.

Reflecting on the topic of the blogpost, I think that WebMatrix and WebPages can indeed deliver quality. I think so for a number of reasons, the obvious ones being if the requirements revolve around speed of delivery or the customer's ability to participate with a basic set of knowledge about web development.

Now what?

This blogpost has been quite unusual for me, because it contains no code at all! Rest assured though, my intent is to turn this into a little series of blogposts – I just wanted to lay the foundation. The current idea is that I will look at how deep the rabbit hole goes, by digging into some of the building blocks of a WebPages application. Hopefully I will be uncovering what WebPages can be used for, along with some techniques and tools that might be used in other contexts.

User Experience on the web - moving beyond jQuery

by DotNetNerd 31. August 2011 21:23

Providing a customized, high quality user experience is becoming increasingly important on the web. It is no longer enough to provide information and functionality; it also has to look and feel nice in a way that contributes to building the company brand. To do this we need to shift some of the focus toward what is running on the client.


Fattening up the client

At first the search for fatter and richer clients led us to focus on Flash or Silverlight. While providing a rich design experience, these also come with their own set of limitations - such as requiring users to install a browser plugin, which in turn removes the browser experience of linking and the option to copy text and images. Since then we have come full circle, so we once again use *drumroll* JavaScript to power applications. With the emergence of HTML5 and CSS3, which will be running across computers, tablets and phones, it looks like JavaScript and friends will continue to play a key role in developing tomorrow's applications.

Curing the JavaScript headache

A few years ago I - like most developers at the time - did not like JavaScript. We saw it as a necessary evil to allow validation, and something we needed to fight with once in a while to show/hide elements on web pages. JavaScript suffered from a range of illnesses such as browser incompatibility, performance issues, lack of widely adopted development practices and a rap for being brittle.


Along came jQuery - a JavaScript library that did a good job of shielding the developer from browser incompatibilities, while providing a simple API for DOM manipulation and a structure for plugins to be built on. The library has since become so popular that some developers talk about "writing jQuery" rather than JavaScript.

Thanks to the need for better user experiences, jQuery, and in no small part the technological advances in browser performance, we developers spend more and more time writing JavaScript. While jQuery has helped a great deal, there are still areas where working with JavaScript seems unstructured and primitive in regard to expressiveness and robustness. So let's look at some options we have to accommodate those needs.

Underscore - the functional tie

Underscore is a JavaScript library that prides itself on being "the tie to go along with jQuery's tux". Basically it provides a utility belt for doing functional-style programming in JavaScript. This is important because regular JavaScript constructs tend to be so low-level that "what" you are doing drowns in "how" you are doing it. With Underscore you get quite a bit more expressiveness, making code easier to read and maintain. Besides the functional core, Underscore also provides a way around some shortcomings of JavaScript - like binding functions to objects - and it has templating capabilities.

This sample shows off some of the functional capabilities, as well as how to use templates.

var sumOfEvens = _(_.range(1, 51))  // 1..50, doubled below to give the even numbers 2..100
    .chain()
    .map(function(number) { return number * 2; })
    .reduce(function(sum, number) { return sum + number; })
    .value();

var template = _.template("The sum of even numbers between 1 and 100 is <%= sum %>");
console.log(template({sum : sumOfEvens}));

"Grow a spine"

Spine and Backbone are two very similar frameworks that provide structure to applications written in JavaScript through an MVC pattern. In more and more scenarios each page can be seen as a small application by itself, so structure is becoming as important as it is on the server. Spine is the smaller of the two, so I will focus on that - but mostly you can assume that Backbone works pretty much the same way. Fundamentally they allow you to work with classes, so you get a way to do inheritance. Building on that you can create models for persisting data and controllers (which are called views in Backbone) that give you a structure for defining events and functions. To support this the libraries also have functions for working with events and a way to handle routing with URL hashes.

This sample shows how to work with a model that is persisted to local storage, how to handle events and how to respond to routing.

var Contact = Spine.Model.setup("Contact", ["id", "first_name", "last_name"]);
Contact.extend(Spine.Model.Local); //Saves to local storage

var eric = Contact.init({id: 1, first_name: "Eric", last_name: "Cantona"});
eric.bind("save", function(){ console.log("Saved!"); });
eric.save();

var ryan = Contact.init({id: 2, first_name: "Ryan", last_name: "Giggs"});
ryan.save();

var App = Spine.Controller.create({
    init: function(){
        this.routes({
            "/users/:id": function(id){
                var c = Contact.find(parseInt(id));
                console.log("Name: " + c.first_name + " " + c.last_name);
            },
            "/users": function(any){
                Contact.each(function(c){
                    console.log("Name: " + c.first_name + " " + c.last_name)
                });
            }
        });
    },
    events: {"click input": "click"},
      click: function(event){
        var c = Contact.find(1);
        console.log("Name: " + c.first_name + " " + c.last_name);
      }
}).init({el: $("#Players")});

Spine.Route.setup();

I have previously written a bit about KnockoutJS, which can be seen as an alternative to using Spine or Backbone. Rather than providing an MVC-like structure, Knockout lets you work with an MVVM model with two-way databinding. I will not try to argue that the Spine/Backbone or KnockoutJS approach is "better", but leave you with the typical developer cop out "it depends". The design decision you face really is what brings greater value in your case: modularity, event handling and routing, or two-way databinding, templating and dependency tracking?

"Testing testing 1-2-3 - is this thing on?"

The last piece of the puzzle is to introduce testing, which should help us write and maintain more robust applications. QUnit is a simple unit testing framework that is popular amongst people who work with jQuery and jQuery plugins. QUnit lets you group tests into modules, so you get a nice overview when you run the test suite.

var add = function(a, b) { return a + b; };

module("Math module");

test("addition", function() {
  expect(2);
  equal( add(2, 2), 5, "failing test" );
  equal( add(2, 2), 4, "passing test" );
});


For those who prefer a BDD style framework, Jasmine is a popular choice. Besides having a different style, Jasmine also has functionality for working with spies, for some more advanced testing scenarios. Both frameworks provide a clean and easy to read syntax, so choosing between the two comes down to taste, or whether there is some small feature in either that you like.

function Calculator() {}

Calculator.prototype.add = function(a, b) {
  this.log(a + " + " + b + " = " + (a + b));
  return a+b;
};

Calculator.prototype.log = function(text) {
  console.log(text);
};

describe("Calculator", function() {
  var sut;
 
  beforeEach(function() {
    sut = new Calculator();   
  });

  it("should be able to add", function() {
    var result = sut.add(2, 2);
    expect(result).toEqual(4);      
  });
 
  it("should log what is added", function() {
    spyOn(sut, 'log');
    var result = sut.add(2, 2);
    expect(sut.log).toHaveBeenCalledWith("2 + 2 = 4");
  });
});

MiniMe–opinionated JavaScript and CSS bundling

by dotnetnerd 6. July 2011 20:03

Why MiniMe?

For a while I have been using SquishIt to minify, bundle and version JavaScript and CSS files – and to a large extent it did a good job. However, on a number of occasions I ran into a group of scenarios where it just didn't quite do enough for what I wanted. So when I was starting a new project and ran into the same issues again, I decided to take a look at making my own.

The basic idea behind MiniMe is that it should make it easy to bundle JavaScript and CSS files across master pages, user controls etc., with the option to control how they are sorted, and end up with one file that is minified and versioned. It should also be easy to introduce into an existing project, with a minimal amount of refactoring, and lastly it should be easy to adhere to best practices and inject the script tag at the very bottom of the HTML page.

These are the requirements that I run into again and again, so I wanted a tool that did exactly that.

Getting started

To make it as easy as possible I made a NuGet package, so all you need to get off the ground is to search for MiniMe in the package manager and hit install.

Building a complete file

Now you have access to the classes MiniJavaScriptBuilder and MiniStyleSheetBuilder, which can be used to build either a JavaScript or a CSS file. The approach is similar, so from here on I will just show the JavaScript case. Using either one you can build a collection of files by calling Add or AddToRequest, which takes a path to the file you wish to add. The difference is that Add is local to the instance of the builder, whereas AddToRequest stores the file for the request across any number of builders. Both methods return the builder instance, so calls to Add/AddToRequest can be chained.

@{ new MiniMe.MiniJavaScriptBuilder()
    .AddToRequest(Url.Content("/scripts/myFirstScriptFile.js"))
    .AddToRequest(Url.Content("/scripts/mySecondScriptFile.js"), 1);
}

When using AddToRequest you can optionally pass an index as a second parameter. Files with a lower index are included before those with a higher index – allowing files that are added from user controls to run after those added in the master page.

Manually rendering a combined file

When all your files have been added you can call either Render or RenderForRequest, which will behave differently depending on whether you have turned debug on in web.config or not. If you are in debug mode, a reference to each file is simply rendered, allowing you to debug like you are used to. If you are NOT in debug mode, the files that were added will be combined and saved to the path you pass to the method. Writing a # as part of the path will enable versioning, so the # is replaced by a hash value of the file content. Versioning makes sure the filename changes when any of the files change, so caching does not prevent your users from getting the changes you have made.

@MvcHtmlString.Create(new MiniMe.MiniJavaScriptBuilder()
    .AddToRequest(Url.Content("/scripts/myFirstScriptFile.js"))
    .AddToRequest(Url.Content("/scripts/mySecondScriptFile.js"), 1)
    .RenderForRequest(Url.Content("~/Scripts/Site_#.min.js")))

Automatically injecting a combined file

Working with complex layouts can be a pain, because you have to take into account the order the user controls are rendered in, and you will have duplication of code to render the files. To solve this, MiniMe comes with an HttpModule that will handle the rendering for you. This means that files added to the request will be bundled, the combined JavaScript is referenced from the very bottom of the page and the stylesheet is referenced from the header. All you have to do is add the HttpModule.

<system.webServer>
    <modules>
        <add name="MiniHttpModule" type="MiniMe.MiniHttpModule, MiniMe"/>
    </modules>
</system.webServer>

By default the HttpModule renders the files to "/Scripts/Site_#.min.js" and "/Content/Site_#.css" – this can be overridden using appSettings.

<appSettings>
    <add key="MiniJsRelativePath" value="/Scripts/OtherSite_#.min.js"/>
    <add key="MiniCssRelativePath" value="/Content/OtherSite_#.min.css"/>
</appSettings>

Create .min.js versions of all JavaScript files

In some cases you might want to have MiniMe generate .min.js versions for any files that have not yet been minified. This will also give you a slight performance boost, because MiniMe will not have to do the minification on the fly when files are combined. It is important to note that MiniMe will only make minified versions when no minified version already exists. Personally my preference is to do it when not running in debug mode, because then I won't have to delete the minified versions when I make changes in order for MiniMe to generate new ones.

if (!HttpContext.Current.IsDebuggingEnabled) new MiniGenerator().EnsureMiniVersions("/Scripts");

Go to the source to gain more insight or contribute

MiniMe is hosted on Bitbucket, so if you wish to see how it works, or if you want to contribute, please don't hesitate. The first version was focused on the features I felt were missing, but there are undoubtedly other scenarios that could provide value.

Who am I?

My name is Christian Holm Diget, and I work as an independent consultant in Denmark, where I write code, give advice on architecture and help with training. On the side I get to do a bit of speaking and help with miscellaneous community events.

Some of my primary focus areas are code quality, programming languages and using new technologies to provide value.

Microsoft Certified Professional Developer

Microsoft Most Valuable Professional
