In an attempt to make my artifacts smaller, I recently chose to use 7-Zip compression, or archiveType: 7z, in an Archive Files task of an Azure DevOps YAML pipeline. 7-Zip provides 5 compression levels: 1 (fastest), 3 (fast), 5 (normal), 7 (maximum) and 9 (ultra), as well as option 0, which doesn’t compress anything.
I like YAML pipelines, because they live in my repo and I can use templates. However, the online editing experience is not particularly great. Sure, the online editor “assists” you when editing a simple file, but if you’re referencing templates, you’re out of luck (or at least I haven’t found a way to edit a template and use the “assistant”).
Long story short, I’m editing a template (in a text editor) and I want to specify the compression level in my task. The official Archive Files task documentation page unfortunately doesn’t document the property that defines this. Luckily, a dummy pipeline and the “assistant” allowed me to quickly figure out that I wanted to set the sevenZipCompression property of the task. Of course, since the property wasn’t documented, neither were the values that it accepted. So with a little patience and using the task’s source code to confirm, I quickly came up with a table, which I decided to persist here for future reference:
7-zip compression levels for the Azure Pipelines Archive files task

7-Zip level             sevenZipCompression value
0 (no compression)      none
1 (fastest)             fastest
3 (fast)                fast
5 (normal)              normal
7 (maximum)             maximum
9 (ultra)               ultra
A complete example of the task would look something like this:
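Here’s a minimal sketch, assuming the v2 ArchiveFiles task; the root folder and archive file paths are placeholders to adapt to your own build:

```yaml
# Sketch: archive the build output as a 7z file with ultra compression.
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(Build.BinariesDirectory)'   # placeholder: what to archive
    archiveType: '7z'
    sevenZipCompression: 'ultra'                     # see the table above
    archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).7z'
```

Swap 'ultra' for any of the other values in the table above to trade compression ratio for speed.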
About a month ago I saw the trailer for Tim Burton’s upcoming movie Miss Peregrine’s Home for Peculiar Children and it immediately caught my attention. It turns out it’s originally a book, so I went over to Amazon, downloaded the first few pages and ended up buying the whole thing. It’s exactly the kind of fantastic universe I like to immerse myself into.
The Kindle version that I bought included three novels actually: Miss Peregrine’s Home for Peculiar Children and its two sequels, Hollow City and Library of Souls. I liked the first one a lot, enjoyed the second one OK, but just wanted to get through the third one to see how the story ended. This seems to happen to me often with series that had a successful first book or first few books, but where you can feel that either pressure from success and/or from the editors drives the sequels rather than a well told story…
I already noticed from the trailer that the movie is a loose adaptation of the books. I just hope it tells the whole story (which can definitely be told in less than 2 hours), rather than try to capitalise on multiple films. But still, I have a feeling that it will be way better in my head, despite Burton’s talent.
I particularly enjoyed the fact that the author went through the trouble of making the books themselves peculiar by using unusual photographs to illustrate (and kind of drive) the story. If you like reading stories of adolescents with special powers running around the UK and through time, the books are really enjoyable and a quick read, so I strongly recommend them.
A colleague and I wanted to get two simple pieces of information: an event’s date and the corresponding message. Furthermore, we only wanted events that had happened today. Here’s what we came up with using LINQPad and the Azure Storage Driver.
First, make sure a table storage account is selected as your database. In my case, the connection is called mycloudstorageaccount.
Then simply perform your query. If you want to copy/paste, here’s the text version:
from l in WADLogsTable
where l.PartitionKey.CompareTo(DateTime.Today.Ticks.ToString("d19")) > 0
select new {
    DateTime = new DateTime(l.EventTickCount.Value),
    l.Message
}
Oh, and don’t forget to check out the SQL tab if you are interested in the “low-level stuff.” It will show the actual URL that was used to query Windows Azure.
I promise, this is the last you will see on this subject today (on my part, anyway). It’s for those who were not online yesterday and/or are in a different time zone and/or didn’t see my post from yesterday and/or don’t speak French.
I’ve been working quite a bit with Windows Azure lately and particularly with Table Storage. I used to use SQL Server Management Studio to work with SQL Server, and I found Azure Storage Explorer (screenshot on the left), which is actually pretty good for working with all three storage options: queues, tables and blobs.
What does it do? It allows you to add storage accounts as connections in LINQPad. It supports the local emulator as well as actual cloud storage. With every connection, you get all the right references as well as a typed data context based on the account’s tables.
I think I included everything you can expect from a typed data context, which will work like a charm with auto-completion if you have a LINQPad license. For anything that’s not already there, the data context exposes a TableClient property that is a CloudTableClient instance. That should be enough to do anything you want.
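For anything beyond the typed context, the raw client is one property away. A minimal sketch (assuming the classic Windows Azure storage SDK’s CloudTableClient.ListTables method and LINQPad’s Dump extension; run as LINQPad statements against the driver connection):

```csharp
// TableClient is the CloudTableClient instance exposed by the driver's data context.
// Enumerate every table in the storage account and render each one in the results pane.
foreach (var table in TableClient.ListTables())
{
    table.Dump(); // LINQPad's Dump() shows the table in the output panel
}
```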
There’s also a couple of other “bonus” features, such as displaying the requested URL in the “SQL” tab of LINQPad:
If you work with Table Storage and LINQPad (and if you haven’t adopted LINQPad yet, this might be the right excuse to try it), don’t hesitate to check out the driver. It hasn’t been thoroughly tested yet, so I’d love to hear your feedback.