Taoffi's blog

prisonniers du temps

Json object explorer

[json objects => property bags => objects]

Reducing dependency between clients and services is a common and important concern in software solutions.

One important area of client/server dependency lies in the structure of the objects involved in exchanged messages (requests / responses). For instance, a new property added to an object on the service side often crashes the other side (the client) until the new property is introduced into the corresponding object there.

I previously posted about loose coupling through property bags abstractions.

My first approach was based on creating a common convention between service and client, which implies transforming the involved objects into property bags whose values are assigned as needed to business objects on each side at runtime. That still seems to be the 'best solution' from my point of view.

Another approach is to transform the received objects (at either side: server/client) into property bags before assigning their values to the related objects.

This second approach is better suited for situations where creating a common convention would be difficult to put in place.

While working on some projects based on soap-xml messages, I wrote a simple transformer: [xml => property bags => objects]. The transformer then helped write an xml explorer (which actually views xml content as its property bag tree; you can read about this in a previous post).

Another project presented a new challenge in that area, as the service (JEE) was using Json format for its messages. In collaboration with the Java colleagues, we could implement the property bag approach which helped ease client / server versioning issues.

A visual tool, similar to the xml explorer, was needed for developers to explore json messages' structures. It was time for me to write a new json <=> property bag parser.

The goal was to:

  • Transform json content to property bags
  • Display the transformed property bag tree

Using Newtonsoft's Json library – notably its Linq extensions – was essential.

Hereafter is the global dependency diagram of the Json explorer app:

 

The main method in the transformation is ParseJsonString to which you provide the string to be parsed.

Its logic is rather simple:

  • A json string is the representation of either:
    • An object:
      • Read its properties (which may contain arrays… see below)
    • Or an array of:
      • Objects: read the array's objects
      • Arrays: read the array's arrays

 

Code snippets

The parse json string method

public static PropertyBag ParseJsonString(string jsonString)
{
    JObject jObj = null;
    PropertyBag bag = new PropertyBag("Json");
    ObjProperty bagRootNode;
    JArray jArray = null;
    string exceptionString = "";

    /// the json string is either:
    /// * a json object
    /// * a json array
    /// * or an invalid string

    // try to parse the string as a JsonObject
    try
    {
        jObj = JObject.Parse(jsonString);
    }
    catch (Exception ex)
    {
        jObj = null;
        exceptionString = ex.Message;
    }

    // try to parse the string as a JArray
    if(jObj == null)
    {
        try
        {
            jArray = JArray.Parse(jsonString);
        }
        catch (Exception ex2)
        {
            jArray = null;
            exceptionString = ex2.Message;
        }

        if(jArray == null)
        {
            bag.Add(new ObjProperty(_exceptionString, null, false) { ValueAsString = exceptionString });
            return bag;
        }
    }

    bagRootNode = new ObjProperty("JsonRoot", null, false);

    if(bagRootNode.Children == null)
        bagRootNode.Children = new PropertyBag();

    bag.Add(bagRootNode);

    if(jObj != null)
    {
        bagRootNode.SourceDataType = typeof(JObject);
        bagRootNode.Children = ParseJsonObject(bagRootNode, jObj);
    }
    else if(jArray != null)
    {
        bagRootNode.SourceDataType = typeof(JArray);
        bagRootNode.Children = ParseJsonArray(bagRootNode, jArray);
    }

    return bag;
}

 

Parse json array code snippet

 

private static PropertyBag ParseJsonArray(ObjProperty parentItem, JArray jArray)
{
    if(parentItem == null || jArray == null)
        return null;

    ObjProperty childItem;

    if(parentItem.Children == null)
        parentItem.Children = new PropertyBag();

    PropertyBag curBag        = parentItem.Children;

    foreach(var item in jArray.Children())
    {
        JObject jo     = item as JObject;
        JArray subArray = item as JArray;
        PropertyBag childBag;
        Type nodeType = subArray != null ? typeof(JArray) : typeof(JObject);

        childItem    = new ObjProperty("item", parentItem, false) { SourceDataType = nodeType };

        if (jo != null)
            childBag = ParseJsonObject(childItem, jo);       // fills childItem.Children
        else if(subArray != null)
            childBag = ParseJsonArray(childItem, subArray);  // fills childItem.Children
        else
            continue;

        curBag.Add(childItem);
    }

    return curBag;
}
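
The ParseJsonObject method referenced above is not shown. Here is a minimal sketch of what such a method could look like, assuming the same ObjProperty / PropertyBag types used in the snippets above (the leaf-value handling is simplified):

private static PropertyBag ParseJsonObject(ObjProperty parentItem, JObject jObj)
{
    if(parentItem == null || jObj == null)
        return null;

    if(parentItem.Children == null)
        parentItem.Children = new PropertyBag();

    PropertyBag curBag = parentItem.Children;

    foreach(JProperty prop in jObj.Properties())
    {
        ObjProperty childItem = new ObjProperty(prop.Name, parentItem, false);

        switch(prop.Value.Type)
        {
            case JTokenType.Object:
                childItem.SourceDataType = typeof(JObject);
                ParseJsonObject(childItem, (JObject)prop.Value);    // recurse: fills childItem.Children
                break;

            case JTokenType.Array:
                childItem.SourceDataType = typeof(JArray);
                ParseJsonArray(childItem, (JArray)prop.Value);      // recurse: fills childItem.Children
                break;

            default:
                childItem.ValueAsString = prop.Value.ToString();    // leaf value
                break;
        }

        curBag.Add(childItem);
    }

    return curBag;
}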

 

Json to Xml

Now that we have the json content in property bags, we can almost directly get the xml equivalent (see screenshot below).

The sample json string used for the following screenshot:

 

{
    "web-app": {
    "servlet": [
    {
        "servlet-name": "cofaxCDS",
        "servlet-class": "org.cofax.cds.CDSServlet",
        "init-param": {
            "configGlossary:installationAt": "Philadelphia, PA",
            "configGlossary:adminEmail": "ksm@pobox.com",
            "configGlossary:poweredBy": "Cofax",
            …
            …
            "maxUrlLength": 500
            }
    },
    {
        "servlet-name": "cofaxEmail",
        "servlet-class": "org.cofax.cds.EmailServlet",
        "init-param": {
            "mailHost": "mail1",
            "mailHostOverride": "mail2"
            }
    },
    {
        "servlet-name": "cofaxAdmin",
        "servlet-class": "org.cofax.cds.AdminServlet"
    },

    {
        "servlet-name": "fileServlet",
        "servlet-class": "org.cofax.cds.FileServlet"
    },
    {
        "servlet-name": "cofaxTools",
        "servlet-class": "org.cofax.cms.CofaxToolsServlet",
        "init-param": {
            "templatePath": "toolstemplates/",
            "log": 1,
            …
            …
            "adminGroupID": 4,
            "betaServer": true
            }
        }
    ],
    "servlet-mapping": {
        "cofaxCDS": "/",
        "cofaxEmail": "/cofaxutil/aemail/*",
        "cofaxAdmin": "/admin/*",
        "fileServlet": "/static/*",
        "cofaxTools": "/tools/*"
        },

    "taglib": {
        "taglib-uri": "cofax.tld",
        "taglib-location": "/WEB-INF/tlds/cofax.tld"
        }
    }
}



Screenshot

 

You may download the binaries here!

The source code is available here!

OneNote Explorer

In January 2004, Chris Pratley wrote about "OneNote genesis". His article ended with this sentence: "…I can see how this might become addictive."

Yes, as many people who know OneNote often say, "addictive" is the right adjective for it.

What makes it addictive is probably the fact that it is a medium for 'not-yet-documented' ideas. At the same time, it offers a good and simple hierarchical storage that helps organize those ideas for future documentation.

One great thing is that OneNote exposes its objects and methods for developers through an API for extending its features. There are many feature-rich add-ins for OneNote, which can fit your needs in several areas.

In my case, I needed a sort of 'periscope' to explore my own notes. A tool that can let me see my notes ordered by creation or last update dates, mark and retrieve some of them as favorite items, have a quick preview of a note, locate the section's file folder, search notes' titles and/or content… etc.

Using the API, I could write a 'OneNote Explorer'… a tool I started writing in 2010, enriching it with new features from time to time.

OneNote API

The OneNote API is xml-based. A set of methods in the Application interface lets you get information about open notebooks and their structures (section groups, sections, pages… etc.) in xml format.
Understanding the OneNote xsd schema is thus essential.
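
As an illustration, here is a minimal sketch of getting the notebook hierarchy xml through the interop API. It assumes a reference to the Microsoft.Office.Interop.OneNote assembly (2013 schema); the class and variable names are mine, not part of the tool's code:

using System;
using OneNote = Microsoft.Office.Interop.OneNote;

class OneNoteHierarchySample
{
    static void Main()
    {
        OneNote.Application app = new OneNote.Application();
        string hierarchyXml;

        // get all open notebooks, down to page level, as an xml string
        // (the result follows the xsd schema discussed below)
        app.GetHierarchy(string.Empty, OneNote.HierarchyScope.hsPages,
                         out hierarchyXml, OneNote.XMLSchema.xs2013);

        Console.WriteLine(hierarchyXml);
    }
}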

OneNote XSD overview: main objects

 onenote xsd

  • A Notebook is (similar to file folder) a sequence of:
    • Sections
    • And/or Section groups. Where a Section group itself is a sequence of:
      • Section groups
      • And/or Sections. Where a Section is a sequence of:
        • Pages.

OneNote Page definition

onenote page xsd

  • Apart from its attributes (see xsd elements above), a Page is a sequence of elements, each being either:
    • An Image
    • A Drawing
    • A File
    • A Media
    • Or an Outline (similar to an html main <div>), which is (somewhat simplified here) a sequence of:
      • OE children (OneNote Elements). Each OE can be either:
        • An Image
        • A Table
        • A Drawing
        • Or a sequence of:
          • T (text range)

Application overview

Application view model classes:

 application main classes

 

A static class (OneNoteHelpers) exposes several methods to communicate with OneNote API and create / update the view model objects as required:

 code map 1

Summary of methods exposed by the helper static class:

 onenote helpers

Features

  • View selected section pages. Search titles, sort the datagrid, add page to favorites, open page in OneNote, html preview

feature 1

  • View all notebook pages. Search titles, sort the datagrid, add page to favorites, open page in OneNote, html preview

feature 2

  • Search selected notebooks

feature 3

  • Manage favorites: delete, preview, open in OneNote…

feature 4

Page preview note

The application's Preview button displays the html content of the selected page. As the OneNote API can return the page xml content, an xslt style sheet (with a template for each element type of the xsd definition) allows a simple and quick preview.

sample page 

The simple page's xml tree

sample page's xsd

The page xml code

 
<?xml version="1.0" encoding="utf-8"?>
<one:Page xmlns:one="http://schemas.microsoft.com/office/onenote/2013/onenote"
         ID="{138E40BA-13BB-4D24-A78C-D92E4E23D574}{1}{E1949424590587215702781963951539196781170461}"
         name="Sample page title" dateTime="2018-04-20T18:53:38.000Z"
         lastModifiedTime="2018-04-20T19:00:30.000Z"
         pageLevel="1"
         isCurrentlyViewed="true"
         selected="partial"
         lang="en-US">
<!-- ***************** quick styles ***************************** -->
<one:QuickStyleDef index="0"
                     name="PageTitle"
                     fontColor="automatic"
                     highlightColor="automatic"
                     font="Calibri Light"
                     fontSize="20.0" spaceBefore="0.0" spaceAfter="0.0" />
 
<one:QuickStyleDef index="1"
                     name="p"
                     fontColor="automatic"
                     highlightColor="automatic"
                     font="Calibri"
                     fontSize="12.0"
                     spaceBefore="0.0" spaceAfter="0.0" />
    
<!-- ********** page settings ********** -->
<one:PageSettings RTL="false" color="automatic">
<one:PageSize>
<one:Automatic />
</one:PageSize>
<one:RuleLines visible="false" />
</one:PageSettings>
    
<!-- ********** title ********** -->
<one:Title selected="partial" lang="en-US">
<one:OE author="taoffi" authorInitials="T.N."
            lastModifiedBy="taoffi"
            lastModifiedByInitials="T.N."
            creationTime="2018-04-20T18:53:46.000Z"
            lastModifiedTime="2018-04-20T18:53:46.000Z"
            objectID="{9DB0692F-3758-49BC-8E87-F040F40599C2}{15}{B0}"
            alignment="left"
            quickStyleIndex="0" selected="partial">
<one:T><![CDATA[Sample page title]]></one:T>
<one:T selected="all"><![CDATA[]]></one:T>
</one:OE>
</one:Title>
    
<!-- ********** page content (main <div>) ********** -->
<one:Outline author="taoffi"
             authorInitials="T.N."
             lastModifiedBy="taoffi"
             lastModifiedByInitials="T.N."
             lastModifiedTime="2018-04-20T19:00:28.000Z"
             objectID="{9DB0692F-3758-49BC-8E87-F040F40599C2}{30}{B0}">
<one:Position x="36.0" y="86.4000015258789" z="0" />
<one:Size width="123.9317169189453" height="14.64842319488525" />
<one:OEChildren>
        <!-- ********** paragraph ********** -->
<one:OE creationTime="2018-04-20T18:53:47.000Z"
             lastModifiedTime="2018-04-20T18:53:52.000Z"
             objectID="{9DB0692F-3758-49BC-8E87-F040F40599C2}{33}{B0}"
             alignment="left"
             quickStyleIndex="1">
         <!-- ********** paragraph text ********** -->
<one:T><![CDATA[Sample page text]]></one:T>
</one:OE>
</one:OEChildren>
</one:Outline>
</one:Page>

 

OneNote 2010 vs. 2013 and above

There are some compatibility issues between the OneNote API for the 2010 version and the 2013 and above versions. More changes have been introduced in the Office 365 version.

The downloadable binaries here are for OneNote 2013 and 2016 desktop.

You may download the binaries Here!

Back to earth – Xamarin.Forms Cuvée 2017

After some months spent on other project types, I am so glad to find Xamarin.Forms again and to see how it has evolved (matured) with its integration into Visual Studio 2017.

The new Xamarin.Forms project template is awesome and pedagogic. You can start building an entire professional solution around the basic objects delivered within that template!

This 2017 spring (well, it is autumn elsewhere!), other interesting projects also blossomed. Don't miss Grial 2.0 by UXDivers: a great ergonomic framework for Xamarin.Forms.

Those guys @UXDivers are unique! (Will talk about this later)

The new Xamarin.Forms template base objects

Inside the main portable project, you will find the 'Helpers' folder containing some interesting foundation objects, among which the ObservableObject class.

ObservableObject is the base class of another delivered sample object, 'BaseDataObject', from which derives the 'Item' object… another delivered sample object.

 

The relationships between those objects are shown in the following class diagram:

 

That looks to be a great and mature foundation. Deriving your objects from this simple architecture would allow you, among other benefits, to have a reliable property change notification mechanism (See my last year's post about this question)

 

Great… But!

As often happens in our developer community, we may agree on an architectural pattern and disagree on parts of its implementation.

In the new Xamarin implementation, a class calls its parent to set a property value. The parent then sets the new value and notifies the property change.

 

Here is the base class's (ObservableObject) code to set a property value:

 

protected bool SetProperty<T>(ref T backingStore, T value,
             [CallerMemberName]string propertyName = "",
             Action onChanged = null)
{
     if (EqualityComparer<T>.Default.Equals(backingStore, value))
         return false;

      backingStore = value;
     onChanged?.Invoke();
     OnPropertyChanged(propertyName);
     return true;
 }

 

 

A derived class can then use this:

 

string description = string.Empty;

public string Description
{
     get { return description; }
     set { SetProperty(ref description, value); }
}

 

 

In the above example, the property change notification is executed using the caller name (thanks to the CompilerServices.CallerMemberName attribute) unless the name is explicitly specified.

 

Note: msdn documentation

An Empty value or null for the propertyName parameter indicates that all of the properties have changed.

 

In many cases, a property change may require notifying the change of one or more other properties of the object. A simple example: changing the 'BirthDate' of a person object requires notifying the change of his or her Age property as well.
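
A hedged sketch of that case, using the SetProperty overload shown above and its onChanged callback. The Person class and its properties are illustrative, and I assume ObservableObject exposes a protected OnPropertyChanged method (it is called from SetProperty):

public class Person : ObservableObject
{
    DateTime birthDate;

    public DateTime BirthDate
    {
        get { return birthDate; }
        // when BirthDate changes, also notify that the computed Age property changed
        set { SetProperty(ref birthDate, value, onChanged: () => OnPropertyChanged(nameof(Age))); }
    }

    // computed property: depends on BirthDate (rough approximation of the age in years)
    public int Age
    {
        get { return (int)((DateTime.Today - birthDate).TotalDays / 365.25); }
    }
}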

In fact, an object's property value is, in most cases, linked to business rules and is better handled at the object's level. The property change notification, as its name suggests, is related to notifying external objects that may be interested in the value change of that property.

Summarizing this need by a SET mechanism can be good but does not seem to be an all-purpose solution.

We still need to complement this with the 'old' notification mechanism (based on Expressions) exposed in the pattern of the abovementioned post (which, itself, derives from other community knowledge as I mentioned!)

 

[This post corrects and clarifies points mentioned in a previous post]

Comparable sets

When manipulating various collections of items, we often need to compare them.

Functionally speaking, comparing two collections (set1 and set2), in its simplest form, may mean:

  • Get the collection of items in set1 that are not in set2 (or vice versa)
  • Get the collection of identical items
  • Get the collection of different items.

 

In my case, I needed to obtain such compared collections in several projects (Xml files, Open Xml packages, swagger files, ms build project settings… which I talked about in previous posts).

Several interfaces that come with .net may be of help in many contexts (think of IComparable, IEquatable… etc.). Though quite efficient for sorting and equality tests, they do not seem to be the right choice for this particular need of comparing collections.

 

For a collection to produce what we need (identical items, different items and items not in 'another' collection), its items must be able to provide answers to several basic questions.

For an item to tell whether it is 'different' from or 'identical' to another, the two must first be 'comparable'.

In the case of an Xml node for instance, we might consider that two nodes having the same name are comparable. They then can be different if their values are different. And can be identical if their values are identical.

Let us assume an object that provides such answers:

 

public interface IComparableItem
{
   bool IsComparable(IComparableItem other);
   bool IsDifferent(IComparableItem other);
   bool IsIdentical(IComparableItem other);
}

 

And a generic collection of such items

public class ComparableSet<T> : List<T> where T: IComparableItem

 

In such collection, we can then extract items matching our needs:

 

// items not in the other collection
public List<T> ItemsNotIn(ComparableSet<T> otherSet)
{
   List<T>      list   = new List<T>();

   if(otherSet == null)
      return list;

   foreach(var item in this)
   {
      // get an item of the other collection that is comparable to this item
      var comparable   = otherSet.FirstOrDefault(i => i.IsComparable(item));

      if(comparable == null)
         list.Add(item);
   }

   return list;
}

 

Sample implementation - The comparable item

 

public class ComparableSetItem : IComparableItem
{
   public string Name      { get; set; }
   public string Value      { get; set; }

   public bool IsIdentical(IComparableItem other)
   {
      ComparableSetItem   obj   = other == null ? null : other as ComparableSetItem;
      if(obj == null)
         return false;

      if(obj == this)
         return true;

      return obj.Name == this.Name && obj.Value == this.Value;
   }

   public bool IsDifferent(IComparableItem other)
   {
      ComparableSetItem   obj   = other == null ? null : other as ComparableSetItem;

      if(obj == null || obj == this)
         return false;

      return obj.Name == this.Name && obj.Value != this.Value;
   }

   public bool IsComparable(IComparableItem other)
   {
      ComparableSetItem   obj   = other == null ? null : other as ComparableSetItem;

      if(obj == null || obj == this)
         return false;

      return obj.Name == this.Name;
   }
}

 

The comparable generic collection

 

public class ComparableSet<T> : List<T> where T: IComparableItem
{
   // items not in the other collection
   public List<T> ItemsNotIn(ComparableSet<T> otherSet)
   {
      List<T>      list   = new List<T>();

      if(otherSet == null)
         return list;

      foreach(var item in this)
      {
         var comparable   = otherSet.FirstOrDefault(i => i.IsComparable(item));

         if(comparable == null)
            list.Add(item);
      }

      return list;
   }

 

 

 

   public List<T> IdenticalItems(ComparableSet<T> otherSet)
   {
      List<T>      list   = new List<T>();

      if(otherSet == null)
         return list;

      foreach(var item in this)
      {
         var identicals   = otherSet.Where(i => i.IsIdentical(item));

         if(identicals == null)
            continue;

         foreach(var itemx in identicals)
            list.Add(itemx);
      }

      return list;
   }

 

   public List<T> DifferentItems(ComparableSet<T> otherSet)
   {
      List<T>      list   = new List<T>();

      if(otherSet == null)
         return list;

      foreach(var item in this)
      {
         var      diffs   = otherSet.Where(i => i.IsComparable(item));

         if(diffs == null)
            continue;

         foreach(var otherItem in diffs)
         {

            if(item.IsDifferent(otherItem))
               list.Add(otherItem);
         }
      }

      return list;
   }
}

 

A simple console method to test this implementation

 

public static void TestComparableSets()
{
   ComparableSet<ComparableSetItem>   list1   = new ComparableSet<ComparableSetItem>();
   ComparableSet<ComparableSetItem>   list2   = new ComparableSet<ComparableSetItem>();

   list1.Add(new ComparableSetItem() { Name = "name1",    Value = "value1" });
   list1.Add(new ComparableSetItem() { Name = "name2",    Value = "value2" });
   list1.Add(new ComparableSetItem() { Name = "name3",    Value = "value3" });
   list1.Add(new ComparableSetItem() { Name = "name4",    Value = "value4" });
   list1.Add(new ComparableSetItem() { Name = "name5",    Value = "value5" });

   list2.Add(new ComparableSetItem() { Name = "name1",    Value = "value10" });
   list2.Add(new ComparableSetItem() { Name = "name2",    Value = "value2" });
   list2.Add(new ComparableSetItem() { Name = "name3",    Value = "value30" });
   list2.Add(new ComparableSetItem() { Name = "name30",   Value = "value300" });
   list2.Add(new ComparableSetItem() { Name = "name40",   Value = "value40" });
   list2.Add(new ComparableSetItem() { Name = "name5",    Value = "value5" });
   list2.Add(new ComparableSetItem() { Name = "name50",   Value = "value50" });
   list2.Add(new ComparableSetItem() { Name = "name6",    Value = "value60" });

   var      list1NotIn2 = list1.ItemsNotIn(list2);       // {n4/v4}
   var      list2NotIn1 = list2.ItemsNotIn(list1);       // {n30/v30},  {n40/v40},
                                                         // {n50/v50},  {n6/v60}
   var      list1Diff2  = list1.DifferentItems(list2);   // {n1/v10},   {n3/v30}
   var      list2Diff1  = list2.DifferentItems(list1);   // {n1/v1},    {n3/v3}
   var      ident12     = list1.IdenticalItems(list2);   // {n2/v2},    {n5/v5}
   var      ident21     = list2.IdenticalItems(list1);   // {n2/v2},    {n5/v5}
}

 

Using the strategy pattern

The above implementation assumes that you have control over the collection items' structures and are able to change their code.

If that is not the case (like, for instance, when using objects defined in a third party's library) you may use the strategy design pattern in a way similar to the IComparer interface.

 

public interface ICollectionItemComparer
{
   bool IsComparable(object obj1, object obj2);
   bool IsDifferent(object obj1, object obj2);
   bool IsIdentical(object obj1, object obj2);
}

 

The generic collection may then be defined in a way similar to the following:

 

public class ComparableSet<T> : List<T> where T: class
{
   // sample items not in the other collection using strategy pattern
   public List<T> ItemsNotIn(ComparableSet<T> otherSet, ICollectionItemComparer comparer)
   {
      List<T>      list   = new List<T>();

      if(otherSet == null)
         return list;

      foreach(var item in this)
      {
         var comparable   = otherSet.FirstOrDefault(i => comparer.IsComparable(item, i));

         if(comparable == null)
            list.Add(item);
      }

      return list;
   }
}
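
A hedged usage sketch: a comparer for a hypothetical third-party item type (ThirdPartyItem, with Key / Text properties, stands in for a class whose code we cannot modify):

// stand-in for a class coming from a third-party library
public class ThirdPartyItem
{
   public string Key  { get; set; }
   public string Text { get; set; }
}

public class ThirdPartyItemComparer : ICollectionItemComparer
{
   static ThirdPartyItem Cast(object obj) { return obj as ThirdPartyItem; }

   // comparable: same Key
   public bool IsComparable(object obj1, object obj2)
   {
      ThirdPartyItem a = Cast(obj1), b = Cast(obj2);
      return a != null && b != null && a.Key == b.Key;
   }

   // identical: same Key and same Text
   public bool IsIdentical(object obj1, object obj2)
   {
      return IsComparable(obj1, obj2) && Cast(obj1).Text == Cast(obj2).Text;
   }

   // different: same Key but different Text
   public bool IsDifferent(object obj1, object obj2)
   {
      return IsComparable(obj1, obj2) && Cast(obj1).Text != Cast(obj2).Text;
   }
}

Usage would then look like: var missing = set1.ItemsNotIn(set2, new ThirdPartyItemComparer());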

Revisiting the Trie – applied to text search and indexing [.net]

A Trie is an efficient tree structure for storing and searching hierarchically-structured information.

For more detailed information you may have a look Here, Here or Here… to mention a few.

In this article, I take 'Text' as the information to be handled.

Text can be viewed as a tree of 'character' nodes. Each set of these nodes composes a 'word' building block. Several word building blocks may share the same set of character nodes.

Let us look at words like: trie, tree, try, trying. All of them share the 't' and 'r' root nodes. 'try' and 'trying' have their 't', 'r', 'y' as common nodes. This can be presented in the following simple diagram:

Or, in a more graphical presentation, like the figure below (green nodes represent end of words):

 

In the case of text, the efficiency of a Trie comes from the fact that any word of a given text starts with one of the known alphabetical characters of the language used, which represents a relatively limited number of root nodes.

To enhance our tree structure, we may choose to compose our root nodes with only characters used in the manipulated text (i.e. dynamically compose our root character dictionary).

Trie classes

The following class diagram presents the main Trie model used in the application code:

 

  • CharDictionaryItem: stores one character's information (its 'char' value… from which we can get its int value)
  • CharDictionary: the collection of CharDictionaryItem used as the root dictionary for all nodes
  • TrieNode: a trie node item. It refers to one of the above dictionary's items. Contains an IsEndOfWord flag that tells whether the node is the end of a word building block, and provides information related to its status and position in the tree (Children, Neighbors, IsExpanded…). Through its tree position and flags, a TrieNode object can provide us with useful information such as the 'words' (strings) found at its specific position.
  • TrieNodeList: a collection of TrieNode items.
  • Trie: the central object. It contains a CharDictionary and a TrieNodeList. This class is implemented as a singleton in the current sample application.

Collection indexers

Collections (CharDictionary, TrieNodeList) expose a useful indexer that retrieves an item by its character. Those indexers will be used through the code for retrieving items.

TrieNodeList indexer:

 

public TrieNode this[char c]
{
    get { return this.FirstOrDefault(item => item.Character == c); }
}

CharDictionary indexer:

 

public CharDictionaryItem this[char c]
{
    get { return this.FirstOrDefault(item => item.Character == c); }
}

The sample scenario

The sample application proposes the following usage scenario:

  • User selects a text file
  • Parse the file's contents: get its words blocks
  • Compose the root character dictionary
  • Build the words tree
  • Display the presentation tree + additional search functionalities

Parsing text – sample code

Parse text words (having a minimum length of 3 chars in the sample app)

 

int ParseStringTask(string str, int minWordLength)
{
    // split text words
    string[] words = str.Split(new string[]
    {
        " ", "\r\n", "(", ")", ",", ".", "'", "\"",
        ":", "/", "'", "'", "«", "»", "-", "+", "=",
        "*", "[", "]", "{", "}", ";", "&", "<",
        ">", "|", "\\", "`", "^", "…", "\t"
    }, StringSplitOptions.RemoveEmptyEntries);

    string word;

    foreach (string w in words)
    {
        word = w.Trim();

        if (word.Length >= minWordLength)
        {
            this.AddString(word); // add this word to the tree
        }
    }

    return this._nodes.Count;
}

AddString method: creates the new dictionary items / adds the tree nodes of a string block (word)

 

public void AddString(string str)
{
    char c = str[0];
    CharDictionaryItem dicoItem;
    TrieNode firstNode = _nodes[c],
             node;
    int ndx,
        len = str.Length;

    if(firstNode == null)
    {
        // find the dictionary item. add it if none found
        dicoItem = GetDictionaryItem(c);
        firstNode = _nodes.AddTail(dicoItem);
    }

    for(ndx = 1; ndx < len; ndx++)
    {
        c = str[ndx];
        node = firstNode.Children[c];
        // find the dictionary item. add it if none found
        dicoItem = GetDictionaryItem(c);

        if(node == null)
        {
            node = firstNode.Children.AddTail(dicoItem);
        }

        if(ndx >= len - 1)
        {
            // set the End of word flag
            node.IsEndOfWord = true;
        }

        firstNode = node;
    }
}

Reading back the trie words

How do we find our parsed words back through the trie?

Here is a sample code to find words located at a given node:

TrieNode.Strings property:

 

public List<string> Strings
{
    get
    {
        List<string> list = new List<string>();
        var childWords = from c in _children where(c._isEndofWord == true) select c;
        List<string> neigbors = Next == null ? null : Next.Strings;
        string strThis = this.Character.ToString();

        // add words ending at immediate children
        if(childWords != null)
        {
            string str = strThis;

            foreach(TrieNode n in childWords)
            {
                str += n.Character.ToString();

                if(n.IsEndOfWord)
                    list.Add(str);
            }
        }

        // add words that may be found in child nodes
        foreach(TrieNode childNode in _children)
        {
            foreach(string str in childNode.Strings)
                list.Add(strThis + str);
        }

        return list;
    }
}

 

User interface (WPF)

Trie nodes can easily be presented in a TreeView. According to the trie node's flag, its corresponding tree view node should be able to indicate it (through a different image in our example, using a Converter).

The Tree view item template may be:

 

<TreeView.ItemTemplate>
    <HierarchicalDataTemplate ItemsSource="{Binding Children}">
        <StackPanel Orientation="Horizontal" Margin="0" >
            <Image Width="14"
                   Source="{Binding Converter={StaticResource nodeImageConverter}}"
                   Margin="0,0" />
            <TextBlock Text="{Binding Character}" Padding="4,0"
                       Margin="4,0" VerticalAlignment="Center" />
        </StackPanel>
    </HierarchicalDataTemplate>
</TreeView.ItemTemplate>
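
The nodeImageConverter resource referenced above could be a simple IValueConverter along these lines (a sketch: the image paths and class name are illustrative, not the sample app's actual resources):

using System;
using System.Globalization;
using System.Windows.Data;
using System.Windows.Media.Imaging;

public class TrieNodeImageConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        TrieNode node = value as TrieNode;

        // pick an image according to the node's end-of-word flag
        string path = (node != null && node.IsEndOfWord)
                    ? "pack://application:,,,/Images/end_of_word.png"
                    : "pack://application:,,,/Images/char_node.png";

        return new BitmapImage(new Uri(path));
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotSupportedException();
    }
}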

 

Searching trie words

Find words starting with a given string (Trie class)

 

public List<string> StartsWith(string str)
{
    List<String> list = new List<string>();

    char c = str[0];
    TrieNode node0 = this._nodes[c];

    if(node0 == null)
        return list;

    List<string> strings = node0.Strings;

    foreach(string s in strings)
    {
        if(s.StartsWith(str, true, CultureInfo.CurrentCulture))
            list.Add(s);
    }

    return list;
}

 

Find words containing a given string (Trie class)

 

public List<string> Containing(string str)
{
    List<String> list = new List<string>();

    if(string.IsNullOrEmpty(str))
        return list;

    foreach(TrieNode node in this._nodes)
    {
        var strings = from sx in node.Strings where(sx.Contains(str)) select sx;

        foreach(string s in strings)
        {
            list.Add(s);
        }
    }

    return list;
}

 

Sample app screenshots

 

The sample code

You may download the source code Here.

Have fun optimizing the code and adding new features!

A MMXVI side walk: roman numerals

Numeral systems are fascinating!

Presenting values in various numeral systems reveals some hidden aspects of these values and sometimes reveals more about our Human History and knowledge evolution.

Roman is one of these systems. (you may have a look here, here, or here)

A few years ago, I wrote a method to convert decimal numbers into roman. That worked well. The customer wanted a converter for numbering paragraphs; up to 50 would be largely enough, he said. The developer in my head pushed me up to 9999 (sadly, I abandoned there for lack of time).

A few days ago, I saw someone wearing a T-shirt with 'XCIV' logo. And that reminded me that I never wrote the reverse conversion (from roman to decimal).

That was a good occasion to write this. And as I also have a friend who wants to practice C#, that may be a good exercise.

I found my old code again (only to discover it all had to be rewritten!)… and I started working on a new version!

Roman numerals. A brief presentation

As you may know, Roman numerals building blocks are:

Roman    Decimal
I        1
V        5
X        10
L        50
C        100
D        500
M        1000

 

Intermediate values (like in the table below, between 1 and 10) are additions and subtractions of these basic building blocks

Roman    Decimal
I        1
II       2        1 + 1
III      3        1 + 1 + 1
IV       4        5 – 1
V        5
VI       6        5 + 1
VII      7        5 + 1 + 1
VIII     8        5 + 1 + 1 + 1
IX       9        10 – 1
X        10

 

Going beyond the M (1000) [presentation and entries]

Roman numerals were, in their era, obviously handwritten (probably more often engraved on hard stone!).

That surely does not mean that people did not know or need to count beyond 1000. We should never forget that numbers and mathematics are much older than our own era… and that most of the great advances in this area were achieved in older civilizations!

My problem here is just a presentation question: how can I write roman numbers beyond the M (1000)?

Old traditionalists use quirky figures that do not seem easily writable using a 'keyboard'.

Like here:

Or here:

To make presentation and entry a little easier with our era's keyboards, I decided to combine other units to create building blocks beyond 1000. Example: to present the 10000 – 90000 sequence, I used 'XM' and 'VXM':

 

"XM",

// 10000

"XMXM",

// 20000

"XMXMXM",

// 30000

"XMVXM",

// 40000

"VXM",

// 50000

"VXMXM",

// 60000

"VXMXMXM",

// 70000

"VXMXMXMXM",

// 80000

"CMXM",

// 90000

 

For the sequence 1m – 9m, I used characters that are not in the traditional roman building blocks: The 'U', 'W' and 'Y':

"U",

// 1000000

"UU",

// 2000000

"UUU",

// 3000000

"UW",

// 4000000

"W",

// 5000000

"WU",

// 6000000

"WUU",

// 7000000

"WUUU",

// 8000000

"UY",

// 9000000

 

Conversion processing units

The conversion manager object (iRomanNumberDictionary in the above figure) stores a list of roman building blocks.

Each building block corresponds to a decimal sequence factor (1, 10, 100, 1000… etc.) and stores the roman elements of this sequence (see the sample tables above). It also stores the roman group level (a vital info as you will see in the code!)

Decimal to roman

Now, to convert a decimal number into its roman presentation, we proceed this way (here, using 28 as an example; a code sketch follows the list):

  • Take the leftmost digit of the number (leftmost of 28 = 2)
  • Set the index of the sequence to look in = count of number's digits - 1 (for 28 that is 2 – 1 = 1)
    • Note: this target sequence is composed of: "X", "XX", "XXX", … etc.
  • Find the value of this sequence at the element index = the leftmost digit – 1 (2 – 1 = 1)
  • In our case, that would be "XX" (which is decimal 20)
  • Recurse call with the remaining digits of the number (remaining of "28" = "8")
    • Note: that call should return "VIII" (first sequence at the 7th position)
  • Add the string to the "XX" (that would then be "XXVIII")
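
Here is a minimal sketch of that recursion. The sequence tables, class and method names are illustrative (and limited to 1 – 3999 for brevity); this is not the application's iRomanNumberDictionary code:

static class RomanSketch
{
    // illustrative sequences: 1s, 10s, 100s, 1000s
    static readonly string[][] Sequences =
    {
        new[] { "I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX" },
        new[] { "X", "XX", "XXX", "XL", "L", "LX", "LXX", "LXXX", "XC" },
        new[] { "C", "CC", "CCC", "CD", "D", "DC", "DCC", "DCCC", "CM" },
        new[] { "M", "MM", "MMM" }
    };

    public static string ToRoman(int number)
    {
        if (number <= 0)
            return "";

        string digits = number.ToString();
        int leftmost = digits[0] - '0';            // leftmost digit (2 for 28)
        int sequenceIndex = digits.Length - 1;     // which sequence: 0 = 1s, 1 = 10s...

        // value of the target sequence at element index = leftmost digit - 1 ("XX" for 2 tens)
        string head = Sequences[sequenceIndex][leftmost - 1];

        // recurse on the remaining digits ("28" -> "8" -> "VIII")
        string rest = digits.Length > 1 ? ToRoman(int.Parse(digits.Substring(1))) : "";

        return head + rest;
    }
}

// RomanSketch.ToRoman(28)   => "XXVIII"
// RomanSketch.ToRoman(3098) => "MMMXCVIII"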

 

Roman to decimal

Roman to decimal is a bit trickier!

Let us take the reverse conversion of the example above ("XXVIII"), for which the conversion result should be 28 (a companion code sketch follows the list):

  • The conversion step (a parameter) should be set to the highest index of our roman sequences (in my app, I used 7 sequence groups whose factors are: 1, 10, 100, 1000, 10000… 1000000)
  • We have to look for the string in all sequences whose group order is <= the current step
  • Do until we find something:
    • In our example: we look for "XXVIII" in all sequences whose group order is <= 7
    • As the string is not found in any sequence, we continue the search using the string minus 1 char "XXVII"… "XXVI"… "XXV"… "XX"
    • "XX" is found in the sequence whose decimal factor = 10 at number zero-index = 1
    • We store the following value in a List<int>: sequence's decimal factor * (number zero-index + 1). Which results in 10 * 2 = 20
  • Recurse call using the remaining string "VIII" and using the sequence group order – 1 as the conversion step. Add the returned number to our List<int>
  • Return the sum of integers of our List<int>
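
A companion sketch for the reverse direction, reusing the Sequences table of the previous sketch (again illustrative, with no error handling): it searches the sequences for the longest matching prefix of the roman string, then recurses on the remainder.

// to be added to the RomanSketch class above
public static int ToDecimal(string roman)
{
    if (string.IsNullOrEmpty(roman))
        return 0;

    // try the longest prefix first, shortening it until a sequence entry matches
    for (int length = roman.Length; length > 0; length--)
    {
        string prefix = roman.Substring(0, length);

        for (int seq = Sequences.Length - 1; seq >= 0; seq--)
        {
            int index = System.Array.IndexOf(Sequences[seq], prefix);

            if (index >= 0)
            {
                int factor = 1;
                for (int k = 0; k < seq; k++)
                    factor *= 10;                                  // 1, 10, 100, 1000

                int value = factor * (index + 1);                  // e.g. "XX" => 10 * 2 = 20
                return value + ToDecimal(roman.Substring(length)); // recurse on the rest ("VIII")
            }
        }
    }

    return 0;   // no building block matched: invalid roman string
}

// RomanSketch.ToDecimal("XXVIII")    => 28
// RomanSketch.ToDecimal("MMMXCVIII") => 3098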

 

To better understand what goes on in the conversion process, I added a conversion history that explains the steps of each conversion.

Here is the processing history for XXVIII (28):

 

Another processing history for a greater roman number MMMXCVIII (3098):

 

You may download the code (WPF) HERE. Have fun extending and enhancing for the better!

Yet another [rather simple] OpenXML package explorer!

As its name implies, OpenXML is an XML-centric technology (i.e. one which extensively uses XML to describe the meta-model and contents of each and every element of a document).

OpenXML documents themselves (MS Word, Excel, PowerPoint…) are compressed packages that include numerous xml files which describe the document's meta-models and contents.

To explore these items, you can simply rename the document file to .zip and then navigate through this zip archive to view the elements that may be of interest.

After proceeding this way, you must of course not forget to rename the file back to its original extension (.docx, .xlsx… or whatever). When working on an OpenXML project, you obviously have to do this many times a day, which becomes a somewhat daunting task!

Some very good tools exist to explore OpenXML documents' structures… but most of them expose a logical view of the document rather than an xml-centric view. That is of course very helpful… but does not apply when you just need to see the physical structure and the xml content of meta-models and elements.

Moreover, most of these tools are somewhat outdated (2007-2009…) and many simply crash on most of our 'modern era' OSs!

Add some other good reasons to write another explorer:

It is August, it is raining, and I don't know what to do this weekend (while listening to Maria Joao Pires playing Mozart sonatas!).

 

The purpose and tools?

What I need is:

  • Open an OpenXML document as a zip archive (without having to rename it to .zip!)
  • Explore the content of the archive (folders / files)
  • Explore / search the xml tree of xml files that may be found there

 

Adding a reference to System.IO.Compression and System.IO.Compression.FileSystem, you can (relatively easily) present the archive tree structure like in the following image (a minimal sketch of opening the archive follows):
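
A minimal sketch of the underlying idea: opening the document directly as a zip archive (no renaming) and listing its entries. The file path is of course illustrative:

using System;
using System.IO.Compression;

class OpenXmlAsZipSample
{
    static void Main()
    {
        // any OpenXML document (.docx, .xlsx, .pptx...) is a zip package
        using (ZipArchive archive = ZipFile.OpenRead(@"C:\temp\sample.docx"))
        {
            foreach (ZipArchiveEntry entry in archive.Entries)
            {
                // FullName contains the folder path inside the package ("word/document.xml"...)
                Console.WriteLine(entry.FullName);
            }
        }
    }
}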

I created a view model layer to map my needs to the zip archive objects of the System.IO.Compression namespace:

iZipEntryVM: a view model for zip archive entry which offers some useful properties related to my purpose.

iZipEntryVmList: a collection of the above object which offers static methods to create the tree of a zip archive file as well as a 'selected item' property and selection change event.

iOfficeVM: (singleton) entry point ('main view model' in a way) that orchestrates the basic commands and selection change events.

 

XML property bags explorer, again, to the rescue!

I wrote in a previous post about using PropertyBags to explore xml nodes.

The xml explorer objects, methods and controls written during this research are now packaged into an independent library that can be referenced in the current project.

What I have to do is just wire the selection event of an item in the archive tree to the xml explorer objects.

 

That is, when I click an xml file on the zip archive tree, the selection change event is fired. If the selected item is an xml content, it is opened and its stream is sent to the xml explorer object to display the node tree.

 

In practice:

  • On the TreeView control's selection changed event, the SelectedItem of the view model collection (iZipEntryVmList) is set.
  • The collection fires its own Selection changed event.
  • iOfficeVM subscribes to that event and sends the entry stream to the xml explorer:

 

_myArchiveList.SelectedItemChanged += _list_SelectedItemChanged;

private async void _list_SelectedItemChanged(object sender, iZipEntryVM selectedItem)
{
    ZipArchiveEntry    entry    = selectedItem.ZipEntry;

     if(entry == null)
       return;
 
    if(selectedItem.IsXmlFile)
        await Task.Run(() => OpenXmlFile(selectedItem));
}


 

async void OpenXmlFile(iZipEntryVM entryVm)
{
    ZipArchiveEntry    zentry    = entryVm.ZipEntry;
    var        bagVm    = XmlExplorer.Instance.PropertyBagVM;

    if(bagVm != null)
    {
        Stream            stream;
        await Task.Run(() =>
        {
            stream        = zentry.Open();
            // this will build the xml nodes' property bag tree
            bagVm.SourceStream = stream;
        });
    }
}



 

Apart from presenting the xml nodes' tree, XML explorer offers some other useful features:

  • Search for content values and/or node names
  • View / copy the xml string of a selected node
  • Export (save) the current OpenXML item to a file.

 

And, not less important, I no longer have to rename a file to do all that!

Some screenshots

Explorer General View

Sample search results

 

Binaries and source code

You may download the binaries here

You may also download the source code here.

Convert a List<T> to List<something known only at runtime>

I fell into this while writing a data generator for sample objects.

Let us assume this:

 

public class SampleItem
{
    public string Name { get; set; }
    public SampleItem() { }
}

 

public class SampleClass
{
    public List<SampleItem> ObjectList { get; set; }
    public SampleClass() { }
}

 

Using Reflection, the data generator can easily see and assign a string value to SampleItem.Name property (notice: at runtime, our code does not know what SampleItem is)

For my purpose, it was important to insert some sample items into any List<T> property of an object.

In the above sample code that means, for a SampleClass instance, generating a list of SampleItem to be assigned to the instance's ObjectList property.

An elegant solution was found here. Its code is so simple:

 

public static object ConvertList(List<object> value, Type type)
{
var containedType = type.GenericTypeArguments.First();
return value.Select(item => Convert.ChangeType(item, containedType));
}

 

Example usage:

 

var objects = new List<Object> { 1, 2, 3, 4 };
ConvertList(objects, typeof(List<int>)).Dump();

 

That didn't work in my case. I could not assign the ObjectList property (I got an exception) because the resulting value generated by the code above was in fact a List<object>. It only converted the items contained in the list into SampleItem; the list itself was not a List<SampleItem>, it was still a List<object>.

 

The solution (less elegant, but still efficient):

 

static object ConvertList(IEnumerable<object> value, Type listType)
{
var containedType = listType.GenericTypeArguments.First();
var tmpList     = value.Select(item => Convert.ChangeType(item, containedType)).ToList();
var newList     = Activator.CreateInstance(listType);
MethodInfo addMethod = listType.GetMethod("Add");

foreach(var item in tmpList)
  addMethod.Invoke(newList, new object[] { item });

return newList;
}

 

That converts a List<object> and returns an object of the type expected by the property (a List<T>)
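
A hedged usage sketch with the sample classes above (using System.Reflection and System.Collections.Generic): the generated items are assigned to the ObjectList property through reflection. The method and variable names are mine:

static void FillObjectList(SampleClass instance)
{
    // at runtime, the property and its item type are only known through reflection
    PropertyInfo prop = typeof(SampleClass).GetProperty("ObjectList");

    List<object> generated = new List<object>
    {
        new SampleItem { Name = "sample 1" },
        new SampleItem { Name = "sample 2" }
    };

    // ConvertList returns a real List<SampleItem>, so the assignment no longer throws
    prop.SetValue(instance, ConvertList(generated, prop.PropertyType));
}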

Swagger – Browsing REST service definitions

You probably know about Swagger?

Swagger:

OpenAPI Specification (originally known as the Swagger specification) is a specification for machine-readable interface files for describing, producing, consuming, and visualizing RESTful web services.

 

In a way, Swagger does for REST services what WSDL does for SOAP services.

It produces a .json file containing the given service information (data types, operations, parameters… etc.).

I came to know Swagger some days ago at a client site, and it was quite useful for documenting the defined services. The UI provided to access that information was quite poor though.

I looked for a more convenient UI for reading the provided .json, but could not find anything available. I thus decided to write my own… which is the subject of this post.

 

I first thought it would be quite straightforward: just define objects that match the json structure / parse the json code into my objects… et voilà. I admit I was too optimistic!

The swagger.json schema

Let us first try to understand the structures defined in the generated .json file. I read many articles (some rather obscure!) about that, but my understanding got better as the work progressed. Here are the conclusions, seen through a .net developer's eyes:

Swagger json file is composed of:

 

A global (root) service definition, itself composed of (most significant elements for clarity)

  • A Dictionary of service paths:
      • Key = path url
      • Value = Dictionary of operations available at that address. Itself composed of:
        • Key = operation name
        • Value = the operation object. Composed of:
          • General info (name, description, summary… etc.)
          • Operation’s Verb (example: get, post, set, put…)
          • An array of Parameter objects:
            • Description
            • (In) = Where should it be located (example: in the request’s path, argument or body…)
            • Is required (bool flag)
            • Parameter’s data type
          • Operation Responses object = Dictionary of:
            • Key = response code (example: 200, default…)
            • Value = response object:
              • Description
              • Response data type

 

Remark: you may notice that operation parameters are defined as an array. I think they would better be defined as a dictionary as parameter names should be unique within the same operation.

After the paths node, the schema continues with another interesting node:

    • A Dictionary of service’s data types:
      • Key = data type name
      • Value = Dictionary of data type members:
        • Key = element name
        • Value = a data type object:
          • Type (example: object, string, array… may refer to a defined data type object)
          • Element data type (for objects of type array). Refers to a defined data type.

 

Here is a screen capture of significant nodes in a sample swagger json file. You may find more samples here.

 

 

The objects of the schema can be presented in the following class diagram (note that the ServicePaths dictionary Value is a Dictionary of string, iSvcOperation… I could not find a way to represent this relation in the diagram):

 

To parse the swagger json data, we will use the now famous NewtonSoft.Json library.

That needs us to add some attributes for the parse process to go right.

Example, in our root class iSvcDefinition:

The service domain names array should be parsed, so we tell the library about its model name:

[JsonProperty("tags")]
public iSvcDomain[] ServiceDomainNames { get; set; }

 

The class contains a property that should not be parsed… so, we tell the parser to ignore it:

[JsonIgnore]
public List<iSvcDomain> ServiceDomainNamesList
{
    get { return ServiceDomainNames == null ? null : ServiceDomainNames.ToList(); }
}

So far, so good… we have objects that correctly represent the swagger json model (complemented by some view model properties in this example!).

We still have a problem: resolving data type references!

Resolving json references

Data types of elements in our swagger json file are specified as references to the related data-type-definition. For example, a Response that returns a ‘Product’ object is serialized as:

 

"responses": {

"200": {

   "description": "An array of products",

   "schema": { "type": "array",

     "items": { "$ref": "#/definitions/Product" }

   }

},

 

The full definition of the ‘Product’ data type is defined elsewhere, as:

"Product": {
"properties": {
   "product_id": {
     "type": "string",
     "description": "Unique identifier of the product."
   },
   "description": {
     "type": "string",
     "description": "Description of product."
   },
   "display_name": {
     "type": "string",
     "description": "Display name of product."
   },
   "image": {
     "type": "string",
     "description": "Image URL representing the product."
   }
}
}

So each item of ‘Product’ data type will carry a reference to that type (instead of duplicating the definition).

Resolving json references while parsing needs some crafting.

In fact, the json parser does not resolve references automatically. That is part 1 of the problem. Part 2 is the fact that, to find a referenced item, it should exist; that is, it should have already been parsed. This requires the data type definitions to be at the beginning of the parsed file, a requirement that is, at least, not realistic.

 

As always, I searched the web for a sustainable solution. You will find many about this question, including “how to ignore $ref”… which is exactly the opposite of what we are looking for in this context.

The solution I finally used is:

  • Use JObject (namespace Newtonsoft.Json.Linq) to navigate as needed through the swagger json nodes
  • Start the parse process by:
    • Deserializing the swagger "definitions" node which contains the data types definitions
    • Store data types into a Dictionary: key = type id (its path), Value = the data type object
  • Use a custom resolver (a class which implements the IReferenceResolver (namespace Newtonsoft.Json.Serialization) to assign the related data type object each time its reference is encountered.

 

Here are the essential code snippets for resolving references:

// define a dictionary of json JToken for data types
internal static IDictionary<string, JToken> JsonDico { get; set; }

// a dictionary of data types
IDictionary<string, iSvcTypeBase> _types = new Dictionary<string, iSvcTypeBase>();

// create a JObject of the file’s json string
JObject jo = JObject.Parse(jsonString);

// navigate to the definitions node
var typesRoot = jo.Properties().Where( i => i.Name == "definitions").FirstOrDefault();

// store dictionary of type_path / type_token
if (typesRoot != null)
    JsonDico = typesRoot.Values().ToDictionary( i => { return i.Path; });

 

 

Now, we can build our data-type-dictionary using the JTokens:

foreach(var item in JsonDico)
{
   // deserialize the data type of the JToken
   iSvcTypeBase svcType = JsonConvert.DeserializeObject<iSvcTypeBase>(item.Value.First.ToString());

   // add the data type to our dictionary (reformat the item’s key)
  _types.Add(new KeyValuePair<string, iSvcTypeBase>("#/" + item.Key.Replace(".", "/"), svcType));
}

 

 

Our resolver can now return the referenced item to the parser when needed:

public object ResolveReference(object context, string reference)
{
    string                    id = reference;
    iSvcTypeBase      item;
    _types.TryGetValue(id, out item);
    return item;
}
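
For completeness, here is a hedged sketch of how such a resolver can be wired into the deserialization call. SwaggerReferenceResolver stands for the class holding the _types dictionary and the ResolveReference method above (the name is mine); ReferenceResolverProvider is the Newtonsoft.Json setting that supplies a custom IReferenceResolver:

// wire the custom resolver into the serializer settings
JsonSerializerSettings settings = new JsonSerializerSettings
{
    ReferenceResolverProvider = () => new SwaggerReferenceResolver(_types)
};

// "$ref" occurrences are now resolved through our dictionary while deserializing
iSvcDefinition serviceDefinition =
    JsonConvert.DeserializeObject<iSvcDefinition>(jsonString, settings);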

 

Some screen captures:

 

 

You may download the binaries here.

Will post the code later (some cleanup required).

Back end – front end: Services loose coupling

Most mobile solutions are built and operate in a distributed architecture where services play a prominent role for providing data and applying business logic.

Let us take an example of a hotel reservation mobile app:

  • When the user specifies the desired target city, a back end service would provide the list of available managed/operated hotels at this place.
  • Once he or she specifies the desired stay period, the service would filter the above list to provide those having free rooms for that period.
  • When the user selects a room and applies for the reservation, the service would then register that reservation, proceed to payment… etc.

It is hardly imaginable for such an app to operate independently from a back end service.

 

The price of distributed apps

In the previous example, the mobile app presents and manipulates several objects: the place (city/location…), the hotel, the room… etc.

Information about each of these objects is obtained from the service. The structures (metadata) of objects transmitted by the service should obviously match those handled by the mobile app.

In a typical (SOAP) situation, using Visual Studio for instance, your mobile app project would reference the service. Through the service's WSDL file, Visual Studio would generate the code for objects (classes) and operations defined by that referenced service.

If the structure of one of the service's objects changes, you have to relaunch the process to update your app's service reference so that your objects' metadata is in sync with the one defined by the service. If you don't, your app will simply crash at the first service call involving any unmatched object!

After each single change, deploying the solution's components (service / client app) also becomes a big hurdle.

In a few solutions, that may be desirable (or required). In most, the hurdle surpasses the benefits. It is hardly sustainable in a large solution. Actually it is not even sustainable when you are the sole developer of both components!

Resorting to use REST instead of SOAP (you may combine both) does not solve the difficulty, as that only changes the serialization format.

 

Using one single object

Let us imagine a service whose operations return always the same object. That would be quite handy independently of that object's serialization format.

In several large solutions I came to explore during recent years, aggregating service responses into an abstract object, say ServiceResponse, was a common design practice. A given service operation response may look like (wsdl):

<element name="Operation1Response">
    <complexType>
        <sequence>
            <element name="Operation1Output"    type="Operation1OutputObject"/>
            <element name="ServiceStatus"     type="ServiceStatus"/>
        </sequence>
    </complexType>
</element>

 

With this, you know that any service operation you may call will always return a structure containing your expected object along with a service status object.

That approach seems good. Still, it goes only halfway in abstraction, because you still have to know the exact structure (metadata) of the expected object. Hence it doesn't solve the evolution and deployment hassle: if, for some reason, you change the service's object structure, all your consumers should sync this change before deployment. The deployment of a new service version requires the deployment of new consumer versions… big hassle = increased risks.

 

Back to property bags!

I previously talked about property bags (here and here…).

A short reminder:

  • Objects (as we know them so far) are structures composed of: properties, methods and events.
  • An object can be considered as a set of properties (a property bag)
  • An object property can be defined as:
    • A name
    • A data type
    • A Value
  • Property value can be either:
    • A primitive type (value type or whatever we may consider as 'primitive'… string for instance)
    • A composite type: an object itself containing a set of properties (=property bag)

 

 

With that simple structure we are able to represent virtually any object in a strongly-typed manner.

 

Using property bags, we can imagine a unique structure for our service response. Which may look like this (wsdl):

<element name="ServiceResponse">
    <complexType>
        <sequence>
            <element name="OperationOutput"    type="PropertyBag"/>
            <element name="ServiceStatus"     type="PropertyBag"/>
        </sequence>
    </complexType>
</element>

 

Now, when we call any service operation, we know in advance that it will return an output property bag, accompanied with a status property bag.

Each element of our property bag being strongly typed (specifies its data type), we can easily convert the received bag to the expected object.

That conversion itself can be subject to a set of business rules. But that would be implemented 'once' independently of service or consumer versions and deployment constraints.

 

This approach doesn't eliminate the need for a unified business model. It enforces its usage.

By creating a loose coupling mechanism between services and consumers it allows more separation of concerns and minimizes evolution and deployment risks.

Exercise: service side code sample

Let us define service response as:

[DataContract(Namespace = "")]
public partial class ServicePropertyBagStatus : ServiceMessage
{
    PropertyBagContainer    _responseOutput;

    [DataMember]
    public PropertyBagContainer ResponseOutput
    {
        get { return _responseOutput; }
        set { _responseOutput = value; }
    }
    …
    …

Our response structure would then look like (wsdl):

<complexType name="ServicePropertyBagStatus">
    <complexContent mixed="false">
        <extension base="ServiceMessage">
            <sequence>
                <element name="ResponseOutput" type="PropertyBagContainer"/>
            </sequence>
        </extension>
    </complexContent>
</complexType>

 

Sample code of a service operation (for reading a customer data) may look like this:

[OperationContract]
public ServicePropertyBagStatus GetCustomer(int customerId)
{
    return SvcUtilities.LoadCustomer(customerId);
}

Our service utilities module method LoadCustomer:

public static ServicePropertyBagStatus LoadCustomer(int customerId)
{
    ServicePropertyBagStatus    status    = new ServicePropertyBagStatus();

    if(customerId == 0)
    {
        status.SetErrorMessage("No value provided for customerId");
        return status;
    }

    // load the customer's info from the database server
    Customer        customer    = Customer.LoadDbUser(customerId);

    // extract this object's data contract into a property bag container
    status.ResponseOutput = ExtractPropertyBagContainer(customer.GetType(), customer);

    // return the service response status
    return status;
}

Extract the object's property bag:

internal static PropertyBagContainer ExtractPropertyBagContainer(Type type, object obj)
{
    // get soap xml of the object's data contract

    DataContractSerializer serializer = new DataContractSerializer(type);
    MemoryStream stream = new MemoryStream();

    serializer.WriteObject(stream, obj);

    // parse the soap xml into a property bag container
    PropertyBagContainer container = PropertyBagContainer.ParseXml(stream, true, null);

    stream.Close();
    return container;

}

Some Property bag container helper methods:

public class PropertyBagContainer
{
    protected PropertyBag _bag;


    // parse an object into a property bag container
    public static PropertyBagContainer ParseObject(object obj, ObjProperty parent);

    // parse xml stream into a property bag container
    public static PropertyBagContainer ParseXml(Stream stream, bool parseSiblings, ObjProperty parent);

    // parse xml node tree into a property bag container
    public static PropertyBagContainer ParseXml(XElement xnode, bool parseSiblings, ObjProperty parent);
}

Client side code sample

Our service operations will now return a Property Bag Container containing the expected object's property bag. A helper static method of PropertyBag (AssignObjectProperties) allows us to assign its content to a specific object:

public static bool AssignObjectProperties(object targetObject, PropertyBag bag)

 

We can thus write:

public bool ParsePropertyBag(PropertyBag bag)
{

return PropertyBag.AssignObjectProperties(this, bag);

}

 

Assigning property bag contents to an object is done by assigning values of the bag items to object's properties having the same name and data type. Extra logic can be introduced here according to your business requirements (for instance: checking object integrity through specific attributes).

 

Now, in the service consumer application, assume we have a service referenced as CustomerService. We may call the service's GetCustomer operation like in the following code:

public bool LoadServiceCustomer(int customerId)
{
    CustomerService proxy = new CustomerService();
    var status = proxy.GetCustomer(customerId);


    // we may check the service status for errors here 
     ….

    // assign received properties to this object
    return this.ParsePropertyBag(status.ResponseOutput.PropertyBag);
}

Client side with Json

The process is the same in a REST configuration (You may see some details about WCF REST services here) as you actually will always receive a ServicePropertyBagStatus object independently of the transmission format (xml / json… etc.). Parsing the received response into a property bag container can be done using components like the .Net NewtonSoft.Json:

HttpWebRequest request = (HttpWebRequest) WebRequest.Create(url);
string str = GetResponseString(request);
ServicePropertyBagStatus  response = JsonConvert.DeserializeObject<ServicePropertyBagStatus>(str);
PropertyBagContainer container    = response.ResponseOutput;

You can download the property bag binaries (std + portable) here.