Jezz Santos edited this page Apr 22, 2017 · 9 revisions

(This documentation is pre-release)

RE: NuPattern Dying

Design Goals

(What we learned from the NuPattern project)

Don't be deeply integrated into any IDE (neither the design nor the runtime experience)

  • Being tied to a specific IDE (such as Visual Studio Ultimate 2010, 2012, 2013) limited the developer audience considerably, and resulted in a VS-only toolset. (Things have changed since 2009!)
  • Integrating with a specific IDE (such as Visual Studio Ultimate 2010, 2012, 2013) constrained the design tools to what VS could do, and seriously limited contributors' ability to customize via VSX (very difficult, a rare knowledge set, and out of reach for most developers in the world). Also, VS releases forced us to upgrade to every new version, even though many of the extensibility points change between versions (some have even been obsoleted!)

We may choose to separate the authoring experience from the runtime experience.

  • Authoring - (defining the toolkit) the identification of the patterns, the creation of the templates and assets, and creating the commands
  • Runtime - (running the toolkit) running the commands, writing code using the templates

We may choose, for example, to do authoring in the browser on a central SaaS site (one that examines the code and teases out the patterns) and lets the author define the commands. That way everyone gets the same experience, we stay out of the IDEs, and we get a better UX there than in any one tool. No more managing various versions of various plugins for the various IDEs.

We may choose to use open source tools to do the [runtime] generation (e.g. automating yeoman, ReSharper, T4 templates, etc.) depending on the developer tool-chain and platform. For example, 'gulp' or 'npm' tasks, or other runners on the various platforms.

We might then store any generation artifacts and model configuration in the developer's account (to be exportable for running with tools when offline).

Messaging & Positioning

"We are not a code generator, we are a code writer"

We need to be very clear with customers from the start that what they are going to get is a 'toolkit' that was derived from their code base. Not a general toolkit that someone else created for them, and in most cases not likely to be used by anyone else (although that is encouraged for solution providers who bake their IP into their toolkits to reuse across projects).

The code that will be written is code that they designed for their project - not decided by the code-generator.

We have to distance ourselves from the idea that this is another code generator technology.

Generated artifacts

In NuPattern, we generated C# code (using various code-gen patterns) to separate the following kinds of code:

  • Code that was controlled by the toolkit (*.gen.any) - usually using partial classes.
  • Code that was created once by the toolkit, then controlled by the developer (*.any) - usually using partial classes paired with the generated code.

We will still need these kinds of files.

Let's call them 'twoway' and 'oneway' generation patterns.

We will then need to keep track, in some manifest, of which files are which, instead of polluting the source code with that info. (This could be a single JSON file at the root of the project, e.g. 'auto-mate.json'.)
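As a minimal sketch of what such a manifest might contain (the file name 'auto-mate.json' comes from the text above; the schema and paths below are assumptions, not a defined format):

```json
{
  "patterns": {
    "Setting": {
      "twoway": ["src/Settings.gen.cs"],
      "oneway": ["src/Settings.cs"]
    }
  }
}
```

Here 'twoway' files would always be safe for the toolkit to overwrite, while 'oneway' files are generated once and thereafter left to the developer.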

Written Blocks

In NuPattern, we generated whole files of code, and we had commands to handle everything else. This was a good start, but in reality many software patterns are known to the developer as a set of changes that need to be made in one or more files within the code base, depending on how the code base is structured and organized.

Therefore, when a pattern is repeated, the artifacts that are actually written are (1) many, and (2) scattered: code/config snippets are written into many places in the codebase.

Some of those artifacts will be newly created standalone files with a one-to-one mapping to the instance of the pattern, but many will be code snippets injected into existing files at predictable places.

The developer is going to want to define both kinds to repeat their whole pattern.

The toolkit will need to know:

  • The collection of new files that are going to be created as a whole (defined by pattern)
  • The collection of existing files that are going to be updated by injecting a snippet (defined by pattern)
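Those two collections could be captured in a pattern definition along these lines (a sketch only; all type and field names here are assumptions, not an actual toolkit schema):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InjectionTarget:
    path: str     # existing file to update
    marker: str   # where in the file the snippet goes
    snippet: str  # template for the injected code

@dataclass
class PatternDefinition:
    name: str
    # templates for whole new files, created per pattern instance
    new_files: List[str] = field(default_factory=list)
    # snippets injected into existing files at predictable places
    injections: List[InjectionTarget] = field(default_factory=list)

# Hypothetical example: a 'Setting' pattern that creates one new file
# and injects a constant into an existing constants file.
pattern = PatternDefinition(
    name="Setting",
    new_files=["templates/Setting.gen.cs"],
    injections=[
        InjectionTarget(
            path="src/Constants.cs",
            marker="// <inject:constants>",
            snippet='public const string {name} = "{value}";',
        )
    ],
)
```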

In most programming languages the code/configuration structures in each file are well known, but they vary enormously between languages. Understanding enough languages and configuration file structures is going to be a big challenge.

There are perhaps two strategies for managing where code can be injected:

  1. The toolkit understands the structure, the hierarchy (and the style rules of the developer), and what type of snippet is being written (i.e. a declared constant, versus a method, versus a delegate, etc.).
  2. The developer leaves a marker in the file (e.g. a structured comment) to signify the insertion point, which the toolkit would look up.
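Strategy 2 is the simpler of the two, since the toolkit only needs to find the marker text, not understand the language. A minimal sketch (marker syntax and snippet are hypothetical):

```python
def inject_at_marker(source: str, marker: str, snippet: str) -> str:
    """Insert snippet on a new line directly after the first line containing marker."""
    out = []
    injected = False
    for line in source.splitlines():
        out.append(line)
        if not injected and marker in line:
            out.append(snippet)
            injected = True
    return "\n".join(out)

# Hypothetical C# file with a structured comment marking the insertion point.
original = 'class Constants {\n    // <inject:constants>\n}'
result = inject_at_marker(
    original,
    "// <inject:constants>",
    '    public const string SettingName = "34";',
)
```

A real implementation would also need to handle indentation, repeated injections (idempotency), and missing markers, but the lookup itself stays language-agnostic.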

For 1, we would need a framework that already understands many languages and would allow us to inject code at referenceable places in the file, e.g. "add a constant declaration called 'SettingName' with value '34'". Or: write the text `public const string SettingName = "34";` into the constants section of the file and then reformat the code.
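For illustration only, here is what that structural approach looks like when the target language has such a framework built in: Python's own `ast` module can parse a file, insert a declaration at a referenceable place, and re-emit formatted code (a toy stand-in for what a framework like Roslyn provides for C#; requires Python 3.9+ for `ast.unparse`):

```python
import ast

def add_constant(source: str, name: str, value) -> str:
    """Parse the module, prepend a constant assignment, and re-emit the code.
    Demonstrates structure-aware injection: we manipulate the syntax tree,
    not raw text, so formatting is regenerated rather than patched."""
    tree = ast.parse(source)
    # Build the new statement from a snippet, then splice it into the tree.
    assignment = ast.parse(f"{name} = {value!r}").body[0]
    tree.body.insert(0, assignment)
    return ast.unparse(tree)

updated = add_constant(
    "def get_setting():\n    return SettingName\n",
    "SettingName",
    "34",
)
```

The same idea scales to "add a constant declaration called 'SettingName' with value '34'" in any language, provided a parser/unparser for that language exists, which is exactly the hard part the text identifies.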