Day 1 with Microsoft Band

by Matt Milner 31. March 2015 22:16

Since it was introduced I’ve been fascinated with the Microsoft Band. I’ve known people with fitness trackers that gave information on things like steps, or even monitored sleep, but they never really interested me. I think it was because the devices themselves were so simple and did only a few things, and there was no interaction with them. I really liked that the Band provided lots of monitoring options (heart rate, sleep, steps, calories, workouts, runs, cycling, UV) thanks to its array of built-in sensors. It just seemed like a significant step up from other fitness devices. I also liked that it paired with an app on the phone, but didn’t require the app for many functions. I finally bit the bullet and bought one, and these are my first impressions. I’ll write more after some significant use.

Purchase and unboxing

I’d looked at the Band at the Microsoft store in my home town several times and tried it on. However, they never seemed to have it in stock, so I finally decided it was just easier to order it from the online Microsoft Store. I ordered it last week with the free shipping and it arrived yesterday. I think Microsoft has really stepped up its game on the packaging of its hardware products, whether it’s the fully recyclable packaging on the Xbox or the Band’s simple box containing the device, the charging cord, and a couple of pamphlets (quick start and warranty). I took the Band out and plugged it in to charge. Immediately upon starting it went into pairing mode looking for my phone. After I installed the Microsoft Health app on the phone, the pairing was simple. The Band needed an update, which I thought would be a giant pain, but the phone downloaded the update and flashed it onto the device. The whole thing took about two minutes and I was up and running.

Fit

One of my biggest concerns was how the Band would feel when wearing it. I’ve heard many people complain about it being bulky and inflexible. When I’d tried the Band on at the store it definitely felt awkward and didn’t seem to fit well despite my confidence that I had the right size. The semi-rigid strap had me concerned. Honestly though, when I got the device and put it on, it immediately felt more comfortable than the one in the store. I’m not sure why that is, and it may be all psychological since I’d committed to buying it, but it definitely felt better. After wearing it for a day, I can say that it isn’t any more uncomfortable than a regular watch. I haven’t worn a watch for several years, so for me to not really be bothered by it is something.

I still haven’t decided if I like the display on the top or the bottom of my wrist. Given the orientation of the display, it is much easier to read when the Band is worn “under” the wrist. But it feels more natural, and is less likely to scratch the display, to wear it on top of my wrist. I think I’ll have to keep trying it both ways to figure out what I like. It’s also possible I’ll like different positions for different activities.

What I Tried

I want to do my best to put this thing through its paces and try everything.

Last night I used the sleep tracker and it was cool to see how it tracked the time it took me to get to sleep, when I woke for a few minutes at midnight, and the various depths of my sleep. The heart rate and overall sleep metrics were nice to see. I’m looking forward to seeing some of this data over time and then using it in conjunction with data about calories burned and other information. I think I might need to keep track of how refreshed I feel on given days, how strong I feel during workouts, and how productive I am to use alongside the data the Band provides.

Today I did a weight workout, so just as I started I hit the workout tile and then the action button to begin tracking. The tracking was simple, but helpful. It showed me the elapsed time of my workout plus my current heart rate and calorie burn. I was mostly interested in the heart rate since my workout duration was set and I wasn’t trying to hit a calorie goal, but it’s nice to have those other metrics for different types of workouts.

I tried the UV sensor a few times and got varying results. Once, with the sun on my arm/Band I got no reading. Another time I got a nice reading telling me that it would take about an hour for me to burn. I’ll need to try it out again and see if I can get more consistent output.

I also really like the haptic feedback when receiving text messages, Facebook messages, or emails. I don’t always feel my phone vibrate when it is in my pocket, but even on the lowest setting I feel the Band vibrate. I didn’t tend to read the email or text messages on the Band, but the general notification and the ability to take a quick glance to see if it was something I needed to pull my phone out for was a nice feature.

I also put my Starbucks card into the Starbucks tile, but didn’t have the chance to try it out. It’s just a simple bar code, so I’m sure it will work fine, but the logistics of getting the watch in front of the scanner may not work as well as it does for a phone.

What I Want to Try

Tomorrow I’m going for a run and going to try the run tracking capability without my phone. I typically run with a Garmin GPS watch and I’m going to take that along as my baseline to see how well the GPS in the Band does in comparison. I like that I’ll be able to track my heart rate without a chest strap (something that’s kept me from using the heart rate monitor with my Garmin for running or other workouts).

I haven’t tried the Cortana integration yet either but I’d like to try it for some basic searches and setting up reminders or calendar entries.

 

All in all, I’ve enjoyed my first day with the Band and I’m happy with my purchase. I’m looking forward to really trying out all aspects of the device. And of course, as a developer, I’ve downloaded the SDK preview and I’m thinking of things I can develop that will integrate with the Band.

Tags:

Personal | Microsoft Band

How many schedulers do we need in Windows Azure?

by Matt Milner 10. February 2014 00:14

Four; we need four ways to do background jobs in Azure.

At the time of this writing, I count four (4) ways to create background or scheduled jobs in Windows Azure. Now, Azure is getting pretty big, and while four seems like a lot, in all likelihood, I may have missed one or two others. Do we need this many ways to run background work? Is this another case of Microsoft delivering multiple ways of doing the same thing? [Linq to SQL, EF . . ahem]. 

While it is quite possible that there is a case to be made that this may be the result of various teams creating the same functionality, I think each of these current offerings provides slightly different functionality and it is important to have them. That said, you can certainly find many cases of overlap. 

The four offerings:

  1. Azure Worker Roles (Cloud Service)
  2. Azure Scheduler service
  3. Azure Mobile Services scheduler
  4. Azure Websites Web Jobs

 

Azure Cloud Service Workers

What are they

A worker role is not so much a scheduled job engine as it is an always-on machine that can process work. When you create a worker role you implement a Run method which is invoked at machine startup. From there you are free to create threads to do a variety of processing tasks: process messages from an Azure queue or Service Bus topic/queue, periodically execute arbitrary code, almost anything you can imagine. You write the code and deploy it to any number of worker role instances, which are essentially virtual machines pre-configured and set up with your code.
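
To make that concrete, here is a minimal sketch of what a worker role’s Run method might look like when polling an Azure storage queue. The queue name, configuration setting and ProcessOrder method are hypothetical, and real code would add error handling and a cancellation path.

    using System;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;

    public class WorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            // "StorageConnectionString" and the "orders" queue are hypothetical names.
            var account = CloudStorageAccount.Parse(
                RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
            CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("orders");
            queue.CreateIfNotExists();

            while (true)
            {
                CloudQueueMessage message = queue.GetMessage();
                if (message != null)
                {
                    ProcessOrder(message.AsString);   // do the real background work
                    queue.DeleteMessage(message);     // then remove the message
                }
                else
                {
                    Thread.Sleep(TimeSpan.FromSeconds(5)); // back off when the queue is empty
                }
            }
        }

        private void ProcessOrder(string body)
        {
            // Placeholder for your processing logic.
        }
    }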

Why you might need them

A worker role allows you to write arbitrary code and run it at scale. Because you are running on a virtual machine you have dedicated memory and CPU for your processing. Workers can be scaled up or scaled out, enabling you to choose more memory and processing power on individual instances as well as the number of instances to run. Additionally, because the virtual machine is dedicated to your worker role, you are able to install third-party software on the machine as part of your deployment using startup tasks.

Of all the options, this one gives you the most control over scaling the job processing nodes and controlling the software available for your code to use. However, for smaller jobs, it can be overkill.

 

Azure Scheduler Service

What are they

The scheduler service currently provides two actions that can be invoked on a schedule: making an HTTP request or sending a message to an Azure queue. You can set up one-time jobs that are invoked immediately or started at a future date and time. You can also set up recurring jobs with typical scheduling capabilities: timed intervals, days of the week, monthly, etc.

To set up a scheduled job you must first create a job collection associated with a data center region. This is because the jobs will be running in cloud services in those data centers. Once the job collection is created you can add a number of jobs to the collection. The number of jobs and their frequency are determined by the scaling selection (currently free or standard options). Jobs can be created using the Windows Azure Management Portal or the REST API. History is provided for job executions, and monitoring lets you know if the jobs are failing or succeeding.

Why you might need them

The scheduler service enables you to trigger custom code through HTTP or an Azure queue on a particular schedule. The key component is the scheduling capability; the bulk of the work happens in your own application. One benefit of this scheduler is that it is not tied to Azure-only targets, since the HTTP endpoint you invoke can be any URL. This service is especially useful when your web application may be dormant due to inactivity and therefore not loaded into memory. I would also expect future releases to provide additional actions, such as sending a message to a Service Bus topic or queue, though I do not know if anything has been stated regarding future plans. This offering will only get more useful as the number of action types increases.
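
Because the scheduler just calls a URL you own, the receiving side can be as simple as a single Web API action in your application. A rough sketch, with a hypothetical controller name and no authentication:

    using System.Net;
    using System.Net.Http;
    using System.Web.Http;

    // Hypothetical endpoint an Azure Scheduler job would invoke, for example
    // with an HTTP POST to https://yourapp.example.com/api/nightlycleanup.
    public class NightlyCleanupController : ApiController
    {
        public HttpResponseMessage Post()
        {
            // Do the scheduled work here (purge stale records, send digests, etc.)
            // and return 200 so the scheduler records a successful run.
            return Request.CreateResponse(HttpStatusCode.OK);
        }
    }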

 

Azure Mobile Services Scheduled Jobs

What are they

The scheduled jobs functionality in Azure Mobile Services is a targeted service that is all about context; the context of your mobile service back end, to be specific. This scheduler lets you run jobs, defined as Node.js JavaScript files, that run in the context of your specific mobile service. That means the scripts have access to your data, push notification settings and other configuration for your service, and you can use the Mobile Services API within your script. You can create scripts and schedule them for execution on a recurring basis or simply define them for ad hoc execution later. The scheduling capabilities are not quite as robust as those in the Azure Scheduler Service, but do allow you to set up your job to recur based on minutes, hours, days and months. You can log information, warnings and errors to the same log used by your mobile service.

Why you might need them

This scheduler was released in preview before the Azure Scheduler service and provided a solution for a common problem: needing back-end code to run without user interaction or at specific times. You can manage the scripts for your scheduled jobs alongside your other code, even putting them under source control using the Git integration. If you are focused on building a mobile service only, then this option may be your best choice, as it will simplify the management of your application assets and the overall management experience.

An alternative approach would be to use the Azure Scheduler Service to call a custom API exposed by your Mobile Service. You might choose this option if you need the more robust scheduling capabilities of the Azure Scheduler Service or if you are already using that scheduler for other work. This approach requires either that your API permissions allow everyone to invoke the custom operation or that you provide your Azure Mobile Services credentials in an HTTP header from the scheduler. At the time of this writing the management portal does not support defining custom headers for HTTP(S) actions; in order to define headers for your action you must use the REST API to create your job in the Azure Scheduler. I’m sure this will get added to the portal over time.

 

Web Jobs in Azure Websites

What are they

Web Jobs enable you to upload a script file (bash, Python, bat, cmd, PHP, etc.) to your web application and run that script as your job. When you define the job you can execute it ad hoc, on a schedule or continuously. The continuous option is interesting because it will restart your job after the executable exits from each run. Another unique aspect of these jobs is that continuous jobs run on all the instances of your website. Additionally, there is a Web Jobs SDK for .NET which provides quick and easy access to Azure storage and queues, so you can run your jobs when new items are added to storage or queues. This extends the reach of your job beyond the website with minimal coding effort required. Like the Azure Scheduler Service, you get a history view of your job executions and can review the details of successes or failures, as well as detailed logs if you use them.
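
As a rough sketch of the SDK approach, the console program below runs a method whenever a message lands on a storage queue. The queue name is hypothetical, the storage connection strings come from configuration, and the attribute and namespace names changed a bit between the preview bits and the later 1.x releases of the SDK.

    using System.IO;
    using Microsoft.Azure.WebJobs;

    public class Functions
    {
        // Runs whenever a new message appears on the (hypothetical) "orders" queue.
        public static void ProcessOrder([QueueTrigger("orders")] string message, TextWriter log)
        {
            log.WriteLine("Processing order message: {0}", message);
        }
    }

    public class Program
    {
        public static void Main()
        {
            // The JobHost picks up the storage connection strings from configuration
            // and keeps listening for work until the web job is stopped.
            var host = new JobHost();
            host.RunAndBlock();
        }
    }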

Why you might need them

If you are building a website and need some code to run in the background without being triggered by user actions on the site or requiring a third-party service to invoke your API, then Web Jobs will fit the bill. Like the Azure Mobile Services scheduler, these scripts have the benefit of running in the context of your website and can read configuration, work with files in the site directories, etc. The SDK is a nice addition and can really simplify your life if you are also going to work with Azure storage as part of your web job. One of the other benefits of this model is the variety of supported script types, which opens up the libraries and commands available in each of those environments.

As with Azure Mobile Services, you could instead use the Azure Scheduler Service to invoke an HTTP request to your website and have an endpoint that handles your work. This narrows your options in terms of the programming language used for the job, since the endpoint would presumably be written in the same language as your site, so you may not be able to take advantage of the libraries in other scripting and coding environments.

Continuous jobs, because they run locally on all instances of your site, are really one of the unique characteristics of web jobs in my opinion. Using an outside scheduler would generally enable you to invoke an endpoint on only one server.

 

Will we see more schedulers or a consolidation?

As I said earlier, it’s quite possible there are schedulers I don’t know about in areas of Azure I don’t tend to use, such as Hadoop or Media Services. It’s also possible we’ll see more schedulers or job engines come online for new or existing services. I think the cast of characters right now mostly makes sense, and each provides some unique functionality. My hope is that there will be a logic to it all: that the notion of running something on a schedule will be centralized on the Azure Scheduler Service, and that the focus will be placed there to expand the scheduling options even more and to increase the targets with Azure-specific options and actions for add-ons from other vendors.

 

What do you think? Is this scheduler/job overkill? Should there be one scheduler to rule them all?

Tags:

Azure

Materials from That Conference Azure Websites talk

by Matt Milner 3. December 2013 04:52

I’m a little late in posting these, but recently had a request from the folks at That Conference to post materials and help test out their website and app design for next year. 

So I have attached my slides from my talk on Azure Websites. If you attended, I hope you left with an understanding of the simplicity, power and manageability that Windows Azure Websites brings to scalable hosting for your website be it ASP.NET, Node.JS, PHP or another technology. 

Tags:

Azure | ASP.Net

WCF Data Services and Web API with OData; choices, choices.

by Matt Milner 2. April 2013 15:13

Back in 2010, I wrote a course for Pluralsight on OData which covers the protocol in general and introduces the viewer to the client and server programming models. This year, Microsoft released updates to ASP.NET Web API which include support for OData in the controllers. Since this latest release, I’ve received several questions about the differences between these two toolsets for building services that support OData, along with requests for guidance on which to use. This is my attempt to answer those queries.

 

OData

OData is a protocol developed by Microsoft and others for data access using web technologies such as HTTP, AtomPub and JSON. One of the benefits of OData is a consistent query experience, defined in the protocol, that enables rich querying using URI query string parameters. This consistent query syntax, much like ANSI SQL, provides a platform-neutral API for working with data.

This means I might be able to write a query like this:

http://pluralsight.com/odata/Categories?$filter=Name eq 'OData' 

 

There are a variety of query string options you can use to filter and identify the resource(s) you want to read or update. I can use this same pattern to apply filters to other OData services using their entity properties.

 

WCF Data Services

WCF Data Services is Microsoft’s library for building OData services or writing OData clients. On the server side, the framework provides a very quick, simple model for exposing all or part of an Entity Framework model as an OData-compatible service with little or no code. This service, scaffolded in minutes, supports read, insert, update and delete if configured to allow it.

If you don’t have an Entity Framework model, you can expose a simple .NET object with IQueryable properties for read-only access, or implement the IUpdatable interface to support update, insert and delete on any collection.

This framework provides the quickest way to get a service up and running when the data model is the primary focus of your application. You also have the ability to extend the service with functions that are exposed over the HTTP API. For example, at Pluralsight we could have a method to return the top 10 courses. This might be a convenience to save the client from having to compute this themselves, or it might be because the data needed to make that distinction isn’t exposed by the service, so the client couldn’t compute or filter to get those same results.
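
As a rough sketch of how little code that takes, the service below exposes a hypothetical PluralsightEntities Entity Framework model read-only and adds a “top courses” service operation. The entity types and the ViewCount property are made up for illustration.

    using System.Data.Services;
    using System.Data.Services.Common;
    using System.Linq;
    using System.ServiceModel.Web;

    public class CourseService : DataService<PluralsightEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            // Expose every entity set read-only; open up write rights per set as needed.
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.SetServiceOperationAccessRule("TopCourses", ServiceOperationRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V3;
        }

        // A service operation like the "top 10 courses" example above.
        [WebGet]
        public IQueryable<Course> TopCourses()
        {
            return CurrentDataSource.Courses
                .OrderByDescending(c => c.ViewCount)
                .Take(10);
        }
    }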

On the client side, the WCF Data Services library provides a .NET interface over the OData protocol and exposes the query semantics as a LINQ provider.  This enables .NET developers to access the data in an OData service as they would any other data source.
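
On the client that looks roughly like the snippet below, where PluralsightContext and Category stand in for the types generated by adding a service reference to the OData service; the LINQ query is translated into the same $filter expression shown earlier.

    using System;
    using System.Linq;

    public class Program
    {
        public static void Main()
        {
            // PluralsightContext and Category are hypothetical generated types.
            var context = new PluralsightContext(new Uri("http://pluralsight.com/odata/"));

            // Translated on the wire into Categories?$filter=Name eq 'OData'
            var odataCategories = context.Categories
                .Where(c => c.Name == "OData")
                .ToList();

            foreach (var category in odataCategories)
            {
                Console.WriteLine(category.Name);
            }
        }
    }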

Microsoft has been moving some OData features into the OData Library to enable reuse in many different scenarios.  This means you don’t have to accept the default WCF Data Services model, especially if you don’t have an EDM for your data source. 

You can, obviously, use the client and service independently. That is, even if you develop your service using another framework, perhaps not even a Microsoft one, you can use the client library to access it.

 

ASP.NET Web API

ASP.NET Web API was introduced last year as a framework for building HTTP services; that is, services that expose their functionality over HTTP (these may or may not be REST services). You build these services using controllers, much like ASP.NET MVC development for web applications. The services are most often focused on exposing certain resources and enabling various actions on those resources.

One of the features of ASP.NET Web API is content negotiation. This enables a client to request a resource, a Course for example, and indicate (using the HTTP Accept header) that they would like the response in JSON, XML, or another format. If the server can support the requested type, it does so, serializing the data appropriately.

It is only natural that customers would want OData’s JSON or AtomPub formats for exposing their resources, and would request support for the query syntax. The beauty of OData is that you don’t have to write umpteen methods for querying (GetCustomer, GetCustomersByCity, GetCustomersByRegion, etc.). So, using pieces of OData Lib, the Web API team enabled support for the OData query syntax on API controller methods and enabled update semantics as well.
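
A rough sketch of what that looks like on a controller with the 2013-era bits is shown below; Course and CourseRepository are hypothetical, and the [Queryable] attribute from the Web API OData package was renamed EnableQuery in later releases.

    using System.Linq;
    using System.Web.Http;

    public class CoursesController : ApiController
    {
        // [Queryable] comes from the Web API OData package (Microsoft.AspNet.WebApi.OData).
        [Queryable]
        public IQueryable<Course> Get()
        {
            // Returning IQueryable lets the framework apply the OData query options
            // ($filter, $orderby, $top, ...) from the request URI to the result.
            return CourseRepository.GetAll().AsQueryable();
        }
    }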

 

Making the decision

Having said all that, I would summarize as follows: WCF Data Services focuses on the data model and limits code, while Web API focuses on the controller/code and enables the formatting and query syntax of OData.

So, if you are looking to expose a data model (EDM or otherwise) quickly and don’t need a lot of code or business logic, WCF Data Services makes that REALLY easy and would be a good starting point. 

If, however, you are building an API and simply want to expose some resources using either OData query syntax or formatting, then ASP.NET Web API is probably the best place to start. 

I hope this is helpful and have fun writing your services no matter what toolset you choose.

Tags:

ASP.Net | WebAPI | Windows Communication Foundation | OData

How I saved the day with Windows Azure Websites

by Matt Milner 2. April 2013 15:01

My wife does a lot of work volunteering at our school.  No, check that, she does a metric ton of work. The school was planning a silent auction to raise money for various programs. As part of this, the group decided to hold an online auction allowing people to bid on various activities offered by the teachers. The only technology available to the group was a CMS for creating web pages and HTML forms that would send email messages.  My wife was planning to respond to email and update the web pages manually a few times each day.

Well, as a developer, that just didn’t sound right to me. 

We worked together to quickly create a simple web application using ASP.NET Web API, jQuery, Knockout.js, SignalR, and Toastr to show the auction items and enable bidding. SignalR allowed all clients to get real time updates on the page. I was impressed with how quickly the site was functional and with great features thanks to these libraries.

The final problem was how to host this awesome website in a short amount of time. We didn’t need a huge amount of scale, or so I thought, but we needed to be able to handle whatever load we might get. Oh, and did I mention the whole point was to raise money? Even if we had a source of funds, we didn’t have time to get approval.

Since I had recently done a course for Pluralsight on Azure WebSites, I knew the perfect solution.

I was able to provision and deploy the site in minutes using the free offering to test and was ready to scale to the shared or reserved instance easily in the portal should the need arise.

The dashboard on the Azure management portal gave me quick insight into how close I was to any limits, how much traffic the site was receiving, and even when there were a few HTTP errors. Having the management portal on top of the deployment, plus the knowledge that the Windows Azure infrastructure was behind the site, made everything run smoothly and put my mind at ease.

The best part? On the last day of the auction we got to watch a bidding war in the last five minutes. Hundreds of dollars of bids were processed in those last few minutes which made a big difference in the total amount of money raised for the school. That never would have happened with HTML forms and email. 

Tags:

Azure

WebAPI or WCF?

by Matt Milner 28. February 2012 13:44

Updated [2/29/2012]: added more information on why HTTP and thus WebAPI is important.

I’ve been part of several conversations over the past few weeks where someone posited the question: now that WebAPI is out, how do I (or my customers) decide when to use it or WCF? This question actually comes in many different flavors:

  • Is WCF done? Does WebAPI replace WCF? Should I stop using WCF HTTP?
  • Why is WebAPI part of ASP.NET? Wasn’t WebAPI originally a WCF framework?
  • If WebAPI is part of ASP.NET, why don’t I just use MVC? What does WebAPI give me over MVC?

 

Is WCF done?

WCF is not done, nor is it going away anytime soon. WCF is the framework to use to build services that are flexible with regard to transport, encoding, and various protocols. This is precisely what WCF was designed for and what it does extremely well. WCF enables me to write service code and contracts which can then be exposed over various bindings (transport, security, etc.). That hasn’t changed and continues to be the case. If you are building a service in your organization and plan to support multiple protocols, or simply use protocols other than HTTP (TCP, named pipes, UDP, etc.), then WCF continues to be your choice.

If you happen to want to expose your service over HTTP with WCF you have two high-level choices: SOAP over HTTP or web HTTP. Obviously SOAP over HTTP is simply a different endpoint/binding choice, again where WCF shines. You can also expose your service using the WCF web HTTP model that has been around since .NET 3.5. This model changes the dispatching to happen based on URI templates and HTTP verbs rather than SOAP actions. The web HTTP model also provides some support for help documentation, surfaces faults in an HTTP-friendly way (think status codes), and returns content in web-friendly formats such as JSON.
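
As a reminder of what that dispatching looks like, here is a small hypothetical contract using the web HTTP programming model; the URI templates and verbs, not SOAP actions, decide which operation runs.

    using System.Runtime.Serialization;
    using System.ServiceModel;
    using System.ServiceModel.Web;

    [ServiceContract]
    public interface ICourseService
    {
        // GET /courses/{id} returns JSON rather than a SOAP envelope.
        [OperationContract]
        [WebGet(UriTemplate = "courses/{id}", ResponseFormat = WebMessageFormat.Json)]
        Course GetCourse(string id);

        // POST /courses creates a new course.
        [OperationContract]
        [WebInvoke(Method = "POST", UriTemplate = "courses",
            RequestFormat = WebMessageFormat.Json,
            ResponseFormat = WebMessageFormat.Json)]
        Course AddCourse(Course course);
    }

    // Hypothetical data contract used by the sketch above.
    [DataContract]
    public class Course
    {
        [DataMember] public string Id { get; set; }
        [DataMember] public string Title { get; set; }
    }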

But, and there had to be a but, WCF was built in a transport-neutral fashion. That’s a selling point, except when you do care about the transport and really want to leverage HTTP, for example.

 

Why is WebAPI part of ASP.NET and not WCF?

Somewhere during development WCF WebAPI became ASP.NET WebAPI.[1] Knowledge that this occurred is often what leads to the previous questions about the fate or uses of WCF. In my opinion, and this is just that, WCF as the backbone of WebAPI was not the best option, because in order to care about HTTP you had to work around a lot of WCF infrastructure. Things like the core Message abstraction were built to embrace any transport and didn’t easily support (note I said “easily”) the various content types that might be negotiated.

When talking with colleagues and looking at what people are doing to build web APIs, the most common choice was overwhelmingly NOT WCF. In fact, the top choices were either an open source platform or using MVC controllers to return JSON results to client pages. The reason, as I see it, is that all these platforms made it easier to get a web API up and running while still allowing you close control over HTTP when you care. For someone simply trying to return some objects to a client as JSON within their MVC web application, it is really simple to add a method to the existing controller and return that data. No configuration, no bindings, nothing but their models and existing controllers.

HTTP is important

Getting close to HTTP allows you to take advantage of the protocol. This means I can fully leverage features of HTTP such as caching, ETags, status codes and the like. Why is this important? There are a variety of reasons, but I’ll focus on a few. Caching GET requests is a huge part of HTTP and of scaling any web site or service. One of SOAP’s big failings is that it relies exclusively on HTTP POST when using HTTP as a transport, and so cannot take advantage of caching of requests, even if those requests are returning slowly changing or unchanging data. Getting close to HTTP allows me to set expiration headers easily on the response and control the caching of my content on the client, intermediaries, etc.

Being able to work easily with ETags enables me to leverage conditional GETs and manage application concerns such as concurrency. Status codes allow me to be explicit when responding to clients about what happened with their request. As an example, when someone posts a new resource to my service I want to respond with success (a 2xx status code), but I also want to provide the right code indicating that the resource was created (201) and provide the Location header so the client knows the exact URL of the resource just created. Being close to HTTP gives me the ability to send the appropriate status code and the appropriate headers so the client can get a richer response, all with the existing HTTP protocol.
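
In Web API that 201-plus-Location response is only a few lines; the controller, Course type and repository below are hypothetical.

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Web.Http;

    public class CoursesController : ApiController
    {
        public HttpResponseMessage Post(Course course)
        {
            // Save the new resource (hypothetical repository), then be explicit
            // with the client: 201 Created plus the URL of the new resource.
            Course created = CourseRepository.Add(course);

            HttpResponseMessage response = Request.CreateResponse(HttpStatusCode.Created, created);
            response.Headers.Location = new Uri(Url.Link("DefaultApi", new { id = created.Id }));
            return response;
        }
    }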

 

It makes sense, when you care about HTTP, to use MVC . . . but MVC is not the best tool for building services either.

 

What does WebAPI give me over MVC?

ASP.NET MVC provides some great tools that could be leveraged for services, including model binding and routing. For most people building web APIs, however, there are other concerns as well. As a simple example, I’ve always felt a little uncomfortable building services in MVC because of the standard routing model that includes the action in the URI. A little thing, sure, and something I could work around with some MVC extensions of my own. Web API provides me a model for routing based on the HTTP verb rather than a URI that contains an action. This puts me close to the HTTP protocol, simplifies my routing and seems right to me. In addition, Web API allows me to fully leverage content negotiation to return various representations of my objects/resources. This means I have a pluggable model for allowing the client to tell me what representation they would like (text/xml, application/json, text/calendar) and for choosing the formatter that creates the best matching representation. All this comes with the ability to use routing, dependency resolution, unit testing, and model binding.
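
A rough sketch of that verb-based routing: the route template below has no {action} segment, so the HTTP verb of the request selects the controller method, and content negotiation picks the formatter for the response. Course and CourseRepository are again hypothetical.

    using System.Collections.Generic;
    using System.Web.Http;

    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // No {action} in the template: the HTTP verb picks the method.
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });
        }
    }

    public class CoursesController : ApiController
    {
        // GET  /api/courses    -> Get()
        // GET  /api/courses/5  -> Get(5)
        // POST /api/courses    -> Post(course)
        public IEnumerable<Course> Get() { return CourseRepository.GetAll(); }
        public Course Get(int id) { return CourseRepository.Find(id); }
        public Course Post(Course course) { return CourseRepository.Add(course); }
    }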

In addition, WebAPI allows you to self-host your services a la WCF (and in fact uses a little WCF under the covers to enable this), so you can, if you choose, go outside ASP.NET / IIS as the host of your service and continue to leverage all these great benefits. This enables you to host your HTTP services in any .NET AppDomain and still use the same routes, controllers, etc.
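
Self-hosting is a small amount of code as well; a minimal sketch with the self-host package, using a hypothetical port:

    using System;
    using System.Web.Http;
    using System.Web.Http.SelfHost;

    public class Program
    {
        public static void Main()
        {
            // Host the same routes and controllers in a console process instead of IIS.
            var config = new HttpSelfHostConfiguration("http://localhost:8080");
            config.Routes.MapHttpRoute(
                "DefaultApi", "api/{controller}/{id}",
                new { id = RouteParameter.Optional });

            using (var server = new HttpSelfHostServer(config))
            {
                server.OpenAsync().Wait();
                Console.WriteLine("Web API self-host listening on http://localhost:8080");
                Console.ReadLine();
            }
        }
    }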

 

So . . . ?

WCF remains the framework for building services where you care about transport flexibility. WebAPI is the framework for building services where you care about HTTP.

 

What do YOU think?

 

[1] To be exact, after the 6th preview release of WCF WebApi

Service Bus EAI and EDI capabilities released to labs environment

by Matt Milner 16. December 2011 05:15

For BizTalk folks, the release today of the EAI and EDI capabilities built on the Azure Service Bus represents one of the first major steps toward integration in the cloud. You can check out the blog post on the Windows Azure blog detailing the features, but essentially you have routing, mapping with lookup capabilities and EDI support in the cloud (that’s minimizing what you get here, but a general summary). There is also an SDK to enable you to create the required artifacts on your development machine and deploy them up to the Azure environment.

I’m excited to see new capabilities released on Service Bus, which is, in my mind, a major differentiator in the PaaS space. Nobody has anything that comes close to the type of stuff Microsoft is doing here and plans to do on the Service Bus in the future. This, coupled with the announcement around BizTalk Server 2010 R2 and its support for the cloud, means that the BizTalk space continues to be interesting and ever expanding. As anyone who does integration work knows, it’s not going away anytime soon, and it’s great to see Microsoft investing in both on-premises and cloud solutions to help customers integrate their disparate systems.

I’m looking forward to seeing this project grow and add new features over time, and even more to seeing how customers take advantage of these capabilities in the cloud.

Tags:

BizTalk Server | Azure | AppFabric

Blog has moved

by Matt Milner 14. December 2011 04:38

Well, as you might have noticed if you got redirected here, my blog has moved to this new location after being hosted over at Pluralsight for the past few years. This change happened rather quickly, so I’m in the process of moving all of the existing posts over into this new blog. Hopefully your redirected requests will bring you directly to the post you are looking for, but if not, please search for it.

I apologize for the inconvenience. I knew this change was coming at some point, which is why I had this blog set up, but the switch came rather unexpectedly and I’m scrambling to get the data migrated.

Tags:

Slides and demos from MDC 11

by Matt Milner 3. October 2011 04:03

Thanks to all who attended my talks on LightSwitch and jQuery templates / data linking at last week’s Minnesota Developers Conference. I’ve received several requests for the demo code and slides which I have included here as links.

Of note for those looking at the jQuery demonstration code, I updated the movie review linking sample to show how to update the rendered template content with the new values from the input form. True to form, this was only a matter of a few lines of code. 

 

Demo code:

LightSwitch asset manager

jQuery templates and data-link

 

Slides:

LightSwitch

jQuery

Tags:

Presentations | LightSwitch

Formatting results of Get-ASAppServiceInstance command in AppFabric

by Matt Milner 23. February 2011 07:40

I’ve been working with AppFabric lately and one of the things I like to do is use PowerShell to get a sense of the current state of my workflows. Unfortunately, I don’t like the format of the output as I find it hard to read.  You can see an example of the default output here.

What I want is a nice succinct table.  So, I fiddled around a bit with PowerShell’s format-table command and was able to get my data looking like this:

The command is pretty simple. I use the Property parameter to the format-table command to identify three properties that I want, using expressions to go after the specific values in which I’m interested.

get-asappserviceinstance -groupby status | format-table -property @{n='Count';e={$_.Count}}, @{n='Status';e={$_.Groups[0].GroupValue}}, @{n='Status';e={$_.Groups[1].GroupValue}}

The expressions drill into the objects returned by AppFabric’s Get-ASAppServiceInstance command when grouping by status.

Hopefully this will prove useful to those of you working with Windows Server AppFabric and PowerShell.

Tags:

AppFabric