Archive for the 'VBA Code' Category

Back, Briefly: Exporting XML From Excel Tables

It’s been a while since I posted anything to this VBA blog, as my main field of work is XML-based documentation (see previous posts). However, I recently did an Excel VBA micro-project, which I think is worth sharing. I’ll describe the problem and outline the solution in this post, and add some details in a couple of further posts.

The overall aim is to provide a way of creating and editing software test definitions, while also making those definitions accessible to a test harness for actually executing the tests. (The solution I’m going to describe is, with appropriate modifications, more widely applicable than just for test definitions).

Since the test definitions fall naturally into a tabular form, Excel is an obvious front-end tool. The problem is how to make the data accessible to a testing harness. One way is to use Apache POI, a Java API for accessing Microsoft Office files – and our developer was keen to use this. However, there are benefits to having a plain text – that is, XML – representation of the data:

  • It can be stored (alongside the Excel workbook) in a version-control repository, and different versions differenced, using a plain text or (better) XML-aware diff tool
  • A body of test definitions can be analyzed and reported on using XQuery
  • Any scripting language with an XML library can access the test data

So, the requirement is for a VBA export facility that takes the test definitions in a workbook and exports them as a single XML file.

Now, if the test definitions were simply a list of tests, it would be very simple – we might be able to use Excel’s built-in XML facilities. However, our test definitions are quite deeply hierarchical:

  1. Test Definition (for a system component or capability)
  2. Test Condition (a functional area to be tested)
  3. Test Scenario (a particular test configuration)
  4. Test (an atomic test)
  5. Test Step (one or more steps within a Test – typically only one)
  6. Test Parameter (property, value and purpose – in, out, environment)

#1 is represented by an entire workbook; #2 to #5 are represented by subsets of (contiguous) table rows; #6 is represented by a single cell in a table row (plus its associated header information).

#2 to #5 have hierarchical, numeric IDs, spread over the first 4 columns of a table. Where (typically) a Test has only a single Step, they are represented by a single row (that is, the Step is implicit). For example:

[worksheet screenshot: the test-definition table]

  • 101 is a Condition, represented by rows 3:9
  • 101.2 is a Scenario, represented by rows 7:9
  • 101.2.1 is a Test, represented by row 8

(each Test here has a single implicit Step). The columns on the right-hand half of the table represent the Parameters. The greying-out of ID numbers is just conditional formatting.
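To make the later steps concrete: a row’s level can be read off from these IDs. Here’s a minimal sketch, assuming the four ID columns are the first four columns of the table and that a row’s level is given by its deepest populated ID column (the function name and approach are mine, not the actual code):

Function IDLevel(rw As Range) As Integer
    'Return the level of a table row: 1 = Condition, 2 = Scenario,
    '3 = Test, 4 = Step; 0 if no ID is present.
    'Assumes the deepest non-empty ID cell (columns 1 to 4) marks the level.
    Dim col As Integer
    For col = 4 To 1 Step -1
        If Len(Trim(CStr(rw.Cells(1, col).Value))) > 0 Then
            IDLevel = col
            Exit Function
        End If
    Next col
    IDLevel = 0
End Function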

The XML looks much like any XML:

[screenshot of the exported XML]

The point to note is that we have a six-level hierarchical structure, with each node in the hierarchy mapping on to a chunk of Excel table.

Now, the solution, in outline. We could pass through the tables and rows, spitting out XML strings as we go. However, this approach is rather limited, as there are a number of other things we might want to do:

  • check the test definitions for structural validity
  • generate output in other formats, such as plain text (really plain, not XML)

As earlier posts have illustrated, I’m keen on building object models that ‘sit behind’ the worksheets and provide the application functionality. This seems a classic case for that approach. Each item in our test definition hierarchy is represented by an object (of the appropriate class), and is built from a chunk of worksheet table.

We start by creating a Test Definition object, and asking it to build itself. It goes through the worksheets and tables, finding the chunks that correspond to Conditions (using the ID level). It creates Test Condition objects, tells them to build themselves from the appropriate chunks, and adds them into a Collection.

Each Test Condition object goes through its chunk, finding the sub-chunks that correspond to Scenarios (using the ID level). It creates Test Scenario objects, tells them to build themselves from the appropriate sub-chunks, and adds them into a Collection. And so on, down the levels…
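To illustrate, a Condition-level Build might look like this. It’s a sketch only: the class, method and collection names (clsTestCondition, clsTestScenario, Build, pScenarios) are my assumptions, and IDLevel is the hypothetical row-level helper sketched earlier.

'In a class module such as clsTestCondition (names are illustrative)
Private pScenarios As Collection

Public Sub Build(chunk As Range)
    'Scan the Condition's chunk of table rows, find each Scenario's
    'sub-chunk by its ID level, and build a child object from it.
    Dim r As Long, startRow As Long
    Set pScenarios = New Collection
    startRow = 0
    For r = 1 To chunk.Rows.Count
        If IDLevel(chunk.Rows(r)) = 2 Then   'a new Scenario starts here
            If startRow > 0 Then AddScenario chunk, startRow, r - 1
            startRow = r
        End If
    Next r
    If startRow > 0 Then AddScenario chunk, startRow, chunk.Rows.Count
End Sub

Private Sub AddScenario(chunk As Range, first As Long, last As Long)
    Dim scen As New clsTestScenario          'hypothetical child class
    scen.Build chunk.Rows(first).Resize(last - first + 1)   'its sub-chunk
    pScenarios.Add scen
End Sub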

Having built the object hierarchy, we can do various things with it. Primarily, we want an XML string, which we can write out to a file. We simply ask the top-level Test Definition object for its XML string. This asks each of its Test Conditions for their XML strings, concatenates them successively, and wraps the result in its own Definition-level XML element (<tdefn> – see above).

Each Test Condition asks its Test Scenarios for their XML strings, concatenates them successively, and wraps the result in its own Condition-level XML element (<tcond> – see above). And so on, down the levels… A few other elements, such as descriptions, get mixed in, as we go.
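Again as an illustrative sketch: the &lt;tcond&gt; element is from the post above, but the method and property names (XMLString, pID, pScenarios) are assumptions, and the real code mixes in further elements such as descriptions.

'(pID would be a module-level variable, set in Build from the ID cell)
Public Function XMLString() As String
    'Concatenate the children's XML, then wrap it in this Condition's element
    Dim scen As clsTestScenario
    Dim s As String
    For Each scen In pScenarios
        s = s & scen.XMLString
    Next scen
    XMLString = "<tcond id=""" & pID & """>" & s & "</tcond>"
End Function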

Note that the object hierarchy is built anew, each time we invoke an operation – there’s no persistence other than through the table rows. This might seem wasteful, but there’s really no performance issue.

In the next post, I’ll say a bit more about the actual code involved, and I’ll say something about the self-checking functionality in a third post.

Multi-Area Ranges

We tend to think of a Range as a contiguous set of cells: a single cell, part (or all) of a column, part (or all) of a row, or a rectangular area. However, a Range is one or more of these. The contiguous areas of a Range are held in its Areas collection. If Areas.Count > 1 then we have a multi-area Range.

To construct a multi-area Range from several single-area ranges, you can use the Union operation, which is provided by the Application (so, effectively, global).
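A trivial illustration (the addresses here are arbitrary):

Dim rng As Range
Set rng = Union(Range("A1:C1"), Range("A6:C8"))
Debug.Print rng.Areas.Count   'prints 2
Debug.Print rng.Address       'prints $A$1:$C$1,$A$6:$C$8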

I came across an example of this in customizing a chart to show only the most recent N records in the source data. Here’s an accumulating table of monthly records:

I want a clustered column chart, with Months as the category (X) axis, and In/Out as the series.

If I wanted data from all records, I would set the chart’s source data range to be the entire table, including the headers, which Excel interprets to give the series names (that is, In and Out).

If I want my chart to show only the most recent 3 months (say), then I need to specify the source data range as the union of A1:C1 and A6:C8. Here’s the Source Data Range as it appears in the chart configuration:

='Data 2010'!$A$1:$C$1,'Data 2010'!$A$6:$C$8

Here’s a procedure that constructs and sets the range for a given number of months:

Sub LastNMonths(nmonths As Integer)
'pre: 0 < nmonths
'  <= Range("A1").CurrentRegion.Rows.Count - 1

    Dim ws As Worksheet
    Dim firstrow As Integer, lastrow As Integer

    Set ws = Worksheets("Data 2010")

    lastrow = ws.Range("A1").CurrentRegion.Rows.Count
    firstrow = lastrow - nmonths + 1

    Dim headers As Range
    Set headers = ws.Range("A1:C1")

    Dim rng As Range
    Set rng = ws.Range("A" & firstrow & ":C" & lastrow)

    Set rng = Union(headers, rng)

    Dim cht As Chart
    Set cht = Charts("Chart 2010")
    With cht
        .SetSourceData rng
        .ChartTitle.Text = _
               MonthsTitle(ws, firstrow, lastrow)
    End With
End Sub

MonthsTitle simply constructs the chart title from the first and last months (from column A) – for example, “Report May-10 to Jul-10”.
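MonthsTitle itself isn’t shown here; a minimal sketch, assuming column A holds the month values as dates displayed as “mmm-yy”, might be:

Function MonthsTitle(ws As Worksheet, firstrow As Integer, _
                     lastrow As Integer) As String
    'Build e.g. "Report May-10 to Jul-10" from the month cells in column A
    MonthsTitle = "Report " & _
                  Format(ws.Cells(firstrow, 1).Value, "mmm-yy") & " to " & _
                  Format(ws.Cells(lastrow, 1).Value, "mmm-yy")
End Function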

As I did this initially in v2003, I have not made use of the ‘table-ness’ of the data – hence the Range("A1").CurrentRegion.

The procedure was called from a ‘Refresh Chart’ command button on a worksheet, with the nmonths value provided via an InputBox.

Before anyone points this out, let me say that you can do a certain amount of this kind of thing using formulas in the Chart configuration (using OFFSET, etc) – see Walkenbach’s ‘Formulas’ book. However, I think it’s clearer, and certainly more flexible, to do it in VBA.

Dynamic Combo Box List

A combo box on a form can pick up the list contents from its RowSource property. This can be the name of a worksheet range – such as “products” (never a literal address, please).

However, this is tricky if the size of the list changes (if the list grows, specifically). With a standard defined Name, you need to keep on redefining the Name. Alternatively, you can use the OFFSET-function trick for dynamic ranges (but that is obscure and depends on things like the number of non-data cells in the column).

In 2007 we have the Structured Reference naming into Tables, such as "ProductTable[Product]", which returns us the data contents of the product column. On a recent course, I was asked if we could use this as a Combo box RowSource property. The good news is that we can:

Any additions to the Table are picked up when the form reopens:

You could get this to work for a multi-column list (e.g. id + name, with the id as BoundColumn). For a 2-column table, you can specify the whole data area: "ProductTable[#data]". It doesn’t matter if the #data area has more columns than the list needs: the leftmost ones are used.
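In code, setting this up might look like the following (the form and control names are made up; the point is the structured reference in RowSource):

Private Sub UserForm_Initialize()
    'single-column list: just the data cells of the Product column
    cboProduct.RowSource = "ProductTable[Product]"

    'or, for a two-column list (id + name) bound to the first column:
    'cboProduct.ColumnCount = 2
    'cboProduct.BoundColumn = 1
    'cboProduct.RowSource = "ProductTable[#data]"
End Sub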

Initially, I thought I’d try this with an ActiveX combo box dropped directly on to the worksheet. However, such on-sheet controls are actually OLEObjects. Rather than a RowSource property, they have a ListFillRange. An OLEObject could be things other than a List/Combo box, in which case this property would be inapplicable. (Similarly, RowSource is actually defined on the Control class).

That’s okay, but unfortunately, OLEObject.ListFillRange does not understand structured references into tables: it needs either a regular Defined Name or (don’t) a literal address. This seems like an oversight: the OLEObject’s Parent is the Worksheet, which holds the Table as a ListObject – so it should be possible to resolve the name.

Nevertheless, it works fine for the Form combo boxes, and I’ll try to use this technique in the future.

Recursive Functions

I’ve been experimenting with XQuery, which is a ‘pure’ functional language: you just have functions and expressions. You iterate over collections of documents/nodes using a SQL-SELECT-like construct: For-Let-Where-OrderBy-Return (FLWOR). There are also quantified expressions: Some/Every-In-Satisfies.

There are no explicit loops, but you can write a function that is recursive – that is, the function calls itself until a terminating case is reached (e.g. the empty-list case). Each recursive call adds a frame to the call stack of the implementation; when the terminating case is reached, the calls unwind, the frames are popped off the stack, and the result value is constructed. The amount of stack space therefore limits how long the chain of recursive calls can be (unless the implementation optimizes recursion to iteration – some functional languages do).

Anyway, I wondered how many recursive calls you can make in VBA before you run out of stack space (this is in Excel 2007). Here’s a trivial recursive function.

Function Recursive(n As Long) As Long
   If n = 0 Then
      Recursive = 1
   Else
      Recursive = 1 + Recursive(n - 1)
   End If
End Function

This works up to n = 6128. When n = 6129, I get Run-time Error 28 – Out of stack space. That’s a rather higher number than I was expecting. If you change the Long to an Integer, it works up to n = 6285, so the number and types of the arguments make a slight difference.
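If you want to find the limit on your own machine, a quick sketch is to keep increasing the depth and trap the error rather than letting it stop the code:

Sub FindStackLimit()
    'Keep increasing the depth until recursion blows the stack (error 28)
    Dim n As Long
    On Error GoTo StackBlown
    For n = 1 To 100000
        Recursive n
    Next n
    Exit Sub
StackBlown:
    Debug.Print "Out of stack space at n = " & n & _
                " (run-time error " & Err.Number & ")"
End Sub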

It’s unlikely perhaps that you would write a straightforwardly recursive function like this, where a loop would do. However, it’s just conceivable that you might get a chain of object-to-object calls, within a custom data structure, long enough to cause an error.

Performance – Functions (addendum)

Following on from an earlier post, I thought that I’d quickly (!) try a different function. Not being a statistician, I wanted something with a simple formula, but that worked on potentially very large arrays: CORREL (correlation between two data sets) is something I can get my head around.

So I set up two columns of 1 million rows: column A containing 1,2,3,4,…; column B containing 2 * A * (1+ nnn), where nnn is a small, random, positive or negative quantity. For example, row 10 contains (10, 21.26344). The column ranges are named. A single cell contains a CORREL formula – this calculates in somewhat less than 1 second.

So I thought I’d try:

correlation = WorksheetFunction.Correl(arrX,arrY)

where arrX, arrY are 1-dimensional arrays obtained from the respective column ranges (whose Values are 2-D, N x 1 arrays). The above call is made between calls to Timer, and with all Application updating switched off. So far so good.
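For reference, the harness was along these lines (a sketch; the range names xdata and ydata, and the procedure name, are mine, and the Application-updating switches are omitted for brevity):

Sub TimeCorrel(nrows As Long)
    Dim arrX() As Variant, arrY() As Variant
    Dim v As Variant, i As Long
    Dim t As Single, correlation As Double

    'read the 2-D (N x 1) Values and flatten them into 1-D arrays
    ReDim arrX(1 To nrows): ReDim arrY(1 To nrows)
    v = Range("xdata").Resize(nrows).Value
    For i = 1 To nrows: arrX(i) = v(i, 1): Next i
    v = Range("ydata").Resize(nrows).Value
    For i = 1 To nrows: arrY(i) = v(i, 1): Next i

    t = Timer
    correlation = WorksheetFunction.Correl(arrX, arrY)
    Debug.Print nrows, Timer - t, correlation
End Sub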

Times for 50000 rows are either 0.031 or 0.047 seconds – precisely those values, never anything in between; the mix seems to be random. For 65536 rows, the two alternative times are 0.047 and 0.063, again precisely, and randomly mixed. That’s pretty odd… though presumably it just reflects the resolution of the Timer function, which is only about 1/64 of a second on Windows, so the measured times snap to multiples of that.

So, whack it up to the full 1 million rows, to get a decent elapsed time? Unfortunately not. In fact, 65537 rows causes a Run-time Error 13 – Type Mismatch on the call to Correl (as above). So something is limited to 16 bits, which is of course the storage for a VBA Integer. But my code does not use Integer anywhere. The actual data values are just Variants, indices are Longs. And the worksheet formula works fine for 1 million rows.

What’s going on? Any suggestions welcome. Given that I can’t get a benchmark for a very large number of rows, there doesn’t seem much point in writing a custom VBA Correl function to compare. Hmmmm.

Performance – Functions

In a post some months ago, there was a discussion of the relative performance of equivalent functions: built-in worksheet versus custom VBA. Dougaj4 reckoned that the VBA function for Transpose is about twice as fast as the WS function. However, in my benchmark, the difference was swamped by Range Access. So I repeated the experiment with an in-memory array, comparing

WorksheetFunction.Transpose(arrin)

with Doug’s double-For-loop implementation. I tried it for a square array of size 1000 to 3000 (so 1 to 9 million elements), with Long values (=i/j).
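The custom function is essentially a double For loop over the 2-D array – a sketch (not necessarily Doug’s exact code):

Function TransposeArray(arrin As Variant) As Variant
    'Straightforward element-by-element transpose of a 2-D array
    Dim arrout() As Variant
    Dim i As Long, j As Long
    ReDim arrout(LBound(arrin, 2) To UBound(arrin, 2), _
                 LBound(arrin, 1) To UBound(arrin, 1))
    For i = LBound(arrin, 1) To UBound(arrin, 1)
        For j = LBound(arrin, 2) To UBound(arrin, 2)
            arrout(j, i) = arrin(i, j)
        Next j
    Next i
    TransposeArray = arrout
End Function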

Results:

  • 1000^2: WS = 0.72, VBA = 0.31
  • 2000^2: WS = 4.08, VBA = 1.27
  • 3000^2: WS = 9.42, VBA = 2.84

Here’s the graph:

So in fact the difference is greater: the custom VBA function is about 3 times faster. I wonder why? The function is pretty simple algorithmically, so it’s not as if there could be wildly different implementations. And there’s no significant initial overhead for the Worksheet function call: its line would pretty much go through (0,0).

I don’t know whether one could expect a similar difference in performance for other functions – say a really meaty statistical function.

Performance – Strings

In the previous post, we looked at the performance hit on accessing worksheet ranges from VBA. As is well known, it is advisable to minimize accesses by reading/writing arrays.

Another area where warnings abound is in operations on large strings. A particular case is building up long strings by concatenation. I expect we’ve all done something like:

mystring = mystring + newpiece

The overhead here seems to be because the old mystring is copied to a new mystring, and then the newpiece appended. The copying time is presumably proportional to Len(mystring).

For ad hoc string building (a range address, or name, or an SQL string in Access) this is not significant. Where problems could occur is when the concatenation is being done inside a high-iteration loop – that is, building up a very large string out of very many pieces.

Since mystring gets longer and longer, each concatenation gets slower and slower. The total time is therefore going to be proportional to n-squared, where n is the number of concatenations (i.e. the number of pieces). This is obviously bad news.

As a simple benchmark, I concatenated a single character string 100,000 times (on my fairly basic laptop), and it took 3.4 seconds; 200,000 times took 12.9 seconds. That’s a factor of 3.8, so roughly 4.
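The benchmark itself is nothing more than this (a sketch; & is used here, though + behaves the same for Strings):

Sub TimeConcat(n As Long)
    'Concatenate n single-character pieces and report the elapsed time
    Dim mystring As String
    Dim i As Long, t As Single
    t = Timer
    For i = 1 To n
        mystring = mystring & "x"
    Next i
    Debug.Print n, Timer - t, Len(mystring)
End Sub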

Now, that’s quite a big string to be building – not something you do every day, I’d guess. But you might be building up a string in memory in order to avoid large numbers of file-write operations, which are presumably going to be much slower than one write of a large string.

There are two alternatives. One is to use the Mid function as a kind of ‘deconstructor’ function on the left-hand side of an assignment (this is a bit odd). This essentially ‘patches’ a substring on top of a base string. The trick here is that you start with an all-spaces base string of the anticipated size of your output string; this can be constructed using the (fast) Space function.

Dim mystring As String, substr As String
Dim i As Long, lensubstr As Long
'size is the anticipated length of the output string, set beforehand

mystring = Space(size)   'pre-allocate an all-spaces buffer of the full size
i = 1
Do While i < size
    substr = "xxx"   'or return from a function
    lensubstr = Len(substr)
    Mid(mystring, i, lensubstr) = substr   'patch the piece into the buffer
    i = i + lensubstr
Loop

Some people advocate using the ‘$’ versions of the string functions – so, Mid$, in this case. These are typed to take and return Strings, rather than Variants, and so are supposed to be more efficient. This does not seem to be the case with my string-building benchmark: the times are the same.

For a 100,000 character string, built one character at a time, it takes 0.05 seconds; for a 200,000 character string, 0.09 seconds. This looks like it might be linear, as you would expect.

Another alternative is to build an array of substrings, and then use the Join function to do a batch-concatenation:

mystring = Join(myarray)

This is about half as fast as the Mid solution (0.09, 0.17 seconds), but is still linear. The advantage of this is that it’s easier to retrieve/modify the substrings prior to the final string-build (with the Mid solution you’d need to keep an array of substring offsets).
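As a sketch, matching the Mid example above (the piece is again a fixed “xxx”, or the return from a function):

Dim pieces() As String
Dim npieces As Long, i As Long

npieces = size \ Len("xxx")
ReDim pieces(1 To npieces)
For i = 1 To npieces
    pieces(i) = "xxx"            'or return from a function
Next i
mystring = Join(pieces, "")      'note the explicit empty separator:
                                 'Join's default separator is a space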

Being good (!) object-oriented programmers, we should really encapsulate our chosen mechanism in a StringBuilder class (I expect that there are some out there). If the strings are being written to a file, then our class could handle this as well: a kind of wrapper around a TextStream object.
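A minimal sketch of such a class (the name and interface are mine), using the Mid-patching mechanism over a pre-allocated, growable buffer:

'Class module: StringBuilder (illustrative)
Private pBuffer As String
Private pLength As Long              'characters actually used

Private Sub Class_Initialize()
    pBuffer = Space(1024)            'initial capacity
    pLength = 0
End Sub

Public Sub Append(s As String)
    If Len(s) = 0 Then Exit Sub
    'double the buffer until the new piece fits
    Do While pLength + Len(s) > Len(pBuffer)
        pBuffer = pBuffer & Space(Len(pBuffer))
    Loop
    Mid(pBuffer, pLength + 1, Len(s)) = s
    pLength = pLength + Len(s)
End Sub

Public Property Get Text() As String
    Text = Left(pBuffer, pLength)
End Property

Client code would then just call Append inside the loop and read Text at the end; a file-writing variant could wrap a TextStream in the same way.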

Another, less significant, optimization is checking for empty strings. Most people would compare with a literal empty string:

If mystring = ""

Since there is supposed to be an overhead in doing this (many times in a loop), alternatives are:

If Len(mystring) = 0

If mystring = vbNullString

The times for 10 million tests were 0.73, 0.36, 0.72 seconds. So the Len function is twice as fast, but it doesn’t really seem something to lose sleep over.

Finally, I’ve seen recommendations that when searching for a substring inside a larger string, you should use the InStr function, rather than Like “*foo*”. Times for 10 million tests were 1.56 and 2.55 seconds. So not hugely significant, and Like is obviously more flexible.

