Service Studio 5.0 had a lot of improvements in the drag & drop department. Some of them are listed in this great video from Rodrigo Coutinho - New in 5.0 - Drag & Drop – but there are a lot more. Here are three of my favorites that aren’t included in that video:
If you drag an input parameter with an Entity Identifier type from the eSpace tree to a preparation or any other kind of action flow, a query to get the respective Entity will be created. This is very handy when creating edit or show screens.
[New in 5.0.2] You can replace any terminal element with any other terminal element by dropping one on top of the other. This is especially useful when you want to replace the default end element of preparations or screen actions with a destination or a download element. What used to take three steps (remove the old element, drag the new one, and recreate the connector) can now be done in a single step!
[New in 5.0.2] You can drag variables on top of widgets to change their binding:
Remember, there are a lot more behaviors like these; just try dragging things around to find them.
The Expression Editor also had a lot of improvements in Service Studio 5.0. You’ve probably already found most of them, but just in case you haven’t, here’s a list of some of them:
- The window is no longer modal so you can open several of them for comparison and copy/paste.
- Besides dragging from the scope tree, you can now also drag variables & actions set as functions from the eSpace tree.
- As you’re typing, the border of the text editor turns red when the expression is invalid, giving you immediate feedback when you make a typo. Additionally, the case of variables and functions is automatically corrected to match their definitions, so you have that extra degree of confidence that the TrueChange™ engine recognized what you just typed when you see it fix the case for you.
- You can now right click most elements in the scope tree and go to their definition:
- The completion list now includes the icons of the elements and tooltips about them:
- The filtering algorithm of the completion list is now smarter, and also matches words in the middle of identifiers instead of just at the start:
- It also detects acronyms, so if you open the completion list by pressing Ctrl+Space and then type “ni”, it automatically selects the “NullIdentifier()” built-in function. In addition, if you don’t have the completion list open yet and just type “ni” followed by Ctrl+Space, it automatically inserts “NullIdentifier()” if no other match is as good. This also works for NullDateTime, CurrDateTime, and other variations.
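Out of curiosity, this kind of acronym matching is easy to approximate: collect the capitalized initials of each candidate and compare them against what was typed. The sketch below is hypothetical C# (CompletionMatcher is not Service Studio’s actual implementation), just to illustrate the idea:

```csharp
using System;
using System.Linq;

static class CompletionMatcher
{
    // Hypothetical sketch: match typed text like "ni" against the
    // capitalized "humps" of a candidate like "NullIdentifier".
    public static bool MatchesAcronym(string typed, string candidate)
    {
        // The initials of each capitalized word: "NullIdentifier" -> "NI".
        string initials = new string(candidate.Where(char.IsUpper).ToArray());
        return initials.StartsWith(typed.ToUpperInvariant(), StringComparison.Ordinal);
    }
}
```

With this, “ni” matches NullIdentifier and “cdt” matches CurrDateTime; a real IDE would layer extra ranking on top to pick the best match.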
A little more than two years ago, shortly after the Agile Platform version 4.1 was released, the engineering team at OutSystems joined together and did a self-review of what we could do to improve our ability to deliver faster and with better quality. We already knew that the developers who used the OutSystems Platform were much more productive than the ones using .NET or J2EE, but we wanted to raise the bar even higher, taking their productivity to the next level. The current version of Service Studio had a few Achilles’ heels that were proving hard to get rid of, in part due to some compromises that had to be made in the past to enable us to get the first version out as soon as possible. That legacy was starting to slow down our ability to deliver constant innovation in the agile software development field, so, after a series of brainstorming sessions, we decided that we would re-engineer some parts of the platform in order to enable us to be even more agile, and prepare ourselves for the great features we wanted to build in the future. And this was the start of what would later be called the Butterfly project.
While most of the R&D began the work on the Antelope version (4.2), two teams were spawned to work on the Butterfly version (5.0). While the Blue Morphos team initiated a deep analysis of the features we would want to deliver in the long term, the Red Bull team immediately started the work on the platform re-engineering that would enable us to deliver those features. This team started off with just two developers (myself included), growing to four developers shortly after. A year later, while 4.2 was approaching completion, another team joined us to work on the 5.0 killer feature – Business Process Technology – already using the new codebase. In the end, most of the R&D was involved, and with the hard work of 14 developers (3 of them fully dedicated to Quality Assurance), 3 product managers, our product delivery manager and of course the support from all the other parts of the company, from marketing to support, this journey finally came to an end, and I think that the result is a great product. I’m really proud of having been involved in it since the start, and I feel like a mother who has just given birth to her newborn child: it was tough, but now that it’s over all that matters is the marvelous new baby :)
So, what did version 5.0 of the Agile Platform bring us? Aside from the Business Process Technology, which by itself is a groundbreaking step in the history of agile development, we were able to deliver a few other big improvements:
Ridiculously faster load and save times. For example, a 5.2MB .oml file that took 75 seconds to open and 20 seconds to save in 4.2 now takes 8 seconds to open and 2.5 seconds to save in 5.0. That amounts to 89.3% and 87.5% speed improvements respectively! Find, Find Usages, Merge & Clone are also now significantly faster.
Most applications crash from time to time, making you lose your work. A few of them save backups periodically and allow you to recover most of it, but many times you still lose a big chunk. Service Studio 5.0 is the only application I know of that automatically heals itself in case of errors, rolling back the tiny action that caused the error and allowing you to continue your work as if no crash had happened, without losing any work at all.
The new improved TrueChange™ engine is now omnipresent, immediately informing you of typos and mistakes in the form of tooltips, visual cues for warnings and errors, drop-down suggestions for common operations, and a message pane that refreshes in real time. In addition, the whole system is now much more strongly bound, and you can really assemble the pieces of your application like a puzzle, without fear of things getting unbound and losing the strong checking abilities. Refactoring is now so easy that your application will be truly built-to-change.
There’s a lot more, so go and try the community edition, and let yourself be pleasantly surprised.
This is just the tip of the iceberg, as much of the potential of the new system is still not being used, so stay tuned for the next chapters. The next versions will be even more groundbreaking.
After you start to use extension methods and see all their benefits, you start thinking about how nice it would be to also have extension properties. When Scott Guthrie first announced them in New "Orcas" Language Feature: Extension Methods, someone immediately asked about extension properties. It just seems the feature is incomplete without both.
One particular area that would benefit greatly from extension properties would be WPF. Every time you define an attached property, you usually create a static getter and a static setter for it. If you could package that up in an extension property, attached properties would be much nicer to use.
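To make the pattern concrete, here’s the kind of boilerplate being talked about (the Watermark property is a made-up example, and this assumes a WPF project referencing WindowsBase):

```csharp
using System.Windows;

// "Watermark" is a hypothetical attached property: today it needs a
// DependencyProperty field plus a pair of static Get/Set accessors.
public static class Watermark
{
    public static readonly DependencyProperty TextProperty =
        DependencyProperty.RegisterAttached(
            "Text", typeof(string), typeof(Watermark),
            new PropertyMetadata(string.Empty));

    public static string GetText(DependencyObject element)
    {
        return (string)element.GetValue(TextProperty);
    }

    public static void SetText(DependencyObject element, string value)
    {
        element.SetValue(TextProperty, value);
    }
}
```

With extension properties, that GetText/SetText pair could surface as a single property directly on the element itself.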
There have been some hints that the C# language team is considering extension properties for the future, so maybe we'll get them in C# 4.0. But if you don't want to wait until 2010, you can have extension properties right now if you use F#. Check out Matthew Podwysocki's Object Oriented F# - Extension Everything.
Updated on 2008/09/17: Fixed problem when skipping elements. Updated on 2008/09/15: Fixed problem when trying to read missing attributes.
One of the new features introduced in .NET 3.5 that I welcomed the most was LINQ to XML. The old DOM API was a bit clumsy to use, and the simple fact that you don't need owner documents any more makes the new XElement much more flexible and pleasant to work with than the old XmlElement.
Also new is an API for streaming XML output, XStreamingElement, which uses deferred execution to give you SAX-like performance with a DOM-like API. There's no streaming XML input API, though, so although you can now get away with not using XmlWriter any more, you'll still need XmlReader when you want good performance on large documents. During the LINQ to XML development, the XML Team considered such an API, but they decided not to do it for Orcas.
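As a minimal sketch of the output side (the Item elements here stand in for any large, lazily generated sequence), XStreamingElement only pulls from the sequence when the element is actually written out:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class StreamingOutputDemo
{
    static void Main()
    {
        // A lazily generated sequence standing in for a large data source.
        var items = Enumerable.Range(1, 3)
                              .Select(i => new XElement("Item", i));

        // XElement would build the whole tree in memory up front;
        // XStreamingElement enumerates `items` only while writing.
        var root = new XStreamingElement("Root", items);
        Console.WriteLine(root); // writes <Root> with three <Item> children
    }
}
```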
Fortunately, Ralf Lämmel proposed such an API in API-based XML streaming with FLWOR power and functional updates. I contacted him to ask if he could publicly release the code of his prototype, but he said he couldn't. Nevertheless, he kindly offered to help me develop a similar library myself, so with his help I implemented a small subset of the functionality he described in the paper. It took a while to get the corner cases right, but it has been used in a real-world scenario for some months now, so I think it's stable enough.
Although XElementReader looks a lot like a subset of XElement, you still have to remember that it's using an XmlReader underneath, so once you access any child element, the reader position has changed.
For example, if you have an element like <Root><A/><B/><A/><B/></Root> and call .Element("A") twice and then .Element("B") twice, the second call for B will return null. If you instead call .Elements("A") and then .Elements("B"), you'll get two A elements, but no B elements at all. So to do this right, you have to iterate over .Elements() and check the .Name property to see whether you're in an A or a B element.
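The same forward-only constraint is easier to see on the raw XmlReader underneath. A minimal standalone illustration of the one-pass, dispatch-on-name style:

```csharp
using System;
using System.IO;
using System.Xml;

class OnePassDemo
{
    static void Main()
    {
        // You see each child exactly once, in document order, so you
        // dispatch on the element name instead of looking names up twice.
        using (var reader = XmlReader.Create(
            new StringReader("<Root><A/><B/><A/><B/></Root>")))
        {
            reader.ReadToFollowing("Root");
            while (reader.Read())
            {
                if (reader.NodeType != XmlNodeType.Element) continue;
                Console.WriteLine(reader.Name); // prints A, B, A, B in order
            }
        }
    }
}
```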
This read-once nature sometimes also complicates debugging. To help with that, you can define XML_DEBUG_MODE to force XElementReader to use an XElement behind the covers instead of an XmlReader, so you can add watches freely without worrying about side effects. But remember to fully test with this conditional compilation symbol off.
Expression trees are one of the more powerful features of C#. They let you manipulate code in ways that almost remind you of LISP macros (but at runtime instead of compile time). Since I discovered them, I managed to almost completely eliminate the usage of reflection in my code, replacing it with much faster code using techniques similar to what Roger Alsing described in Linq Expressions - Creating objects. Expression trees also made possible something that I find myself using a lot these days: what Jomo Fisher described in Fast Switching with LINQ. It's a great example of the powerful things C# allows you to do with a little creativity. As I used it more and more, I collected a few modifications to the original code, so I thought I'd share them here. I named it StaticStringDictionary.
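As a taste of the reflection-elimination side (a minimal sketch in the spirit of Roger Alsing's technique, with a made-up Point class), you can build a `new` expression once, compile it, and reuse the delegate instead of calling Activator.CreateInstance every time:

```csharp
using System;
using System.Linq.Expressions;

class Point
{
    public int X;
    public int Y;
}

static class FastFactory
{
    // Compile a `() => new Point()` expression tree once; calling the
    // resulting delegate is much faster than Activator.CreateInstance.
    public static readonly Func<Point> CreatePoint =
        Expression.Lambda<Func<Point>>(Expression.New(typeof(Point))).Compile();
}

class Demo
{
    static void Main()
    {
        Point p = FastFactory.CreatePoint();
        Console.WriteLine(p.X); // prints 0
    }
}
```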
The main difference between this version and the original code is that I don't assume that the key being looked up is in the dictionary. That forces me to call string.Equals at the end to check if the key is the correct one. If string.Equals returns false, a fallback function is used. This also invalidates the optimization in the original code that took advantage of different keys with the same value. I also made StaticStringDictionary<T> implement IDictionary<string, T> so it would be easier to adapt existing code.
Unfortunately I also stumbled upon some problems with certain dictionaries, similar to the ones Raptor-75 wrote about in the comments of the original article. After a few hours of debugging together with Rui Eugénio (and with the help of the Expression Tree Visualizer and the StructsViz debugger visualizer), we managed to fix all the problems we found. The comparer was changed to ensure the characters at indices already tested are ignored for the ordering, and in some situations the algorithm has to do some backtracking.
Can Erten created a very useful expression tree to CodeDom converter. He wrote about it here: Expression Trees-Lambdas to CodeDom Conversion. I created a Visual Studio debugger visualizer based on it. You can find it attached. Just copy the assembly to My Documents\Visual Studio 2008\Visualizers and you're set. I also attached the full source code.
.NET includes a managed MSMQ API (System.Messaging) to send and receive messages, but unfortunately a key piece of functionality is missing: obtaining the number of messages currently in a queue. Yoel Arnon talks about this problem in more detail here and here. After reading that, we learn that there are three ways to overcome this limitation: using the MSMQ performance counters (with the .NET Performance Counter API or via WMI), using the MSMQ COM wrapper, or using the MSMQ Admin API directly. The first option is not very reliable and the second is not available on all versions of MSMQ, so the third is the preferable way. It's also the most complex. Jared Evans explained how to do it using a managed C++ wrapper here, but that approach adds some complexity to the deployment of your application: it forces you to redistribute the Microsoft Visual C++ Redistributable Package, and if you want to support the x64 versions of Windows, you have to build the C++ wrapper for both x86 and x64 and add logic to the application installer to decide which version of your assembly to install (and the same for the Visual C++ Redistributable, which also has two versions). Some months ago, I upgraded my home PC to Windows Vista x64, so at the time I decided to develop a C#-only solution for this, as there wasn't any publicly available on the net.
The first approach I tried was to look at the mq.h file and translate all the required structures to C#. This proved to be a nightmare: at the time the P/Invoke Interop Assistant hadn't been created yet, and the structures turned out to be a little complex. So instead of trying to translate the structures in full, I turned on the debugger and started examining the fields that were really needed, defining dummy spacer fields for the rest. This took a while, but it was worth it. Here's the code:
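(The listing below is a minimal sketch of that technique rather than the original code: it declares just enough of the mq.h structures, with the MQPROPVARIANT union collapsed to the one field needed, and calls MQMgmtGetInfo with PROPID_MGMT_QUEUE_MESSAGE_COUNT, value 7 in mq.h, to read a queue's message count. It requires Windows with MSMQ installed.)

```csharp
using System;
using System.Runtime.InteropServices;

static class MsmqInfo
{
    // From mq.h; only the property we need.
    const int PROPID_MGMT_QUEUE_MESSAGE_COUNT = 7;
    const ushort VT_NULL = 1;

    [StructLayout(LayoutKind.Sequential)]
    struct MQPROPVARIANT
    {
        public ushort vt;
        public ushort wReserved1, wReserved2, wReserved3;
        public uint ulVal;    // the union member used for the count (VT_UI4)
        public uint reserved; // spacer: pad the union to its full width
    }

    [StructLayout(LayoutKind.Sequential)]
    unsafe struct MQMGMTPROPS
    {
        public uint cProp;
        public int* aPropID;
        public MQPROPVARIANT* aPropVar;
        public int* aStatus;
    }

    [DllImport("mqrt.dll", CharSet = CharSet.Unicode)]
    static extern unsafe int MQMgmtGetInfo(
        string machineName, string objectName, ref MQMGMTPROPS props);

    // formatName example: @"DIRECT=OS:.\private$\test"
    public static unsafe int GetMessageCount(string formatName)
    {
        int propId = PROPID_MGMT_QUEUE_MESSAGE_COUNT;
        var propVar = new MQPROPVARIANT { vt = VT_NULL };
        int status = 0;

        var props = new MQMGMTPROPS
        {
            cProp = 1,
            aPropID = &propId,
            aPropVar = &propVar,
            aStatus = &status
        };

        int hr = MQMgmtGetInfo(null, "queue=" + formatName, ref props);
        if (hr != 0)
            throw new InvalidOperationException(
                "MQMgmtGetInfo failed: 0x" + hr.ToString("X8"));
        return (int)propVar.ulVal;
    }
}
```

A call like MsmqInfo.GetMessageCount(@"DIRECT=OS:.\private$\test") would then return the current count for that local private queue.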
Note that you'll have to go to the project properties and check the "Allow unsafe code" option for this to build.
After building and running, it should give you the following output:
Count should be 0: 0
Count should be 1: 1
Count should be 2: 2
Count should be 1: 1
Count should be 2: 2
Count should be 0: 0