
Finding Unknown Bugs With Property-Based Testing


There are many ways of testing your application or library. The test pyramid provides a good starting point to the most common types of tests—unit tests, integration tests, end-to-end tests, and manual tests. But there are other types of tests, like contract tests, load tests, smoke tests, and what we'll be looking at in this article—property-based tests.


What's the Idea?

Property-based testing is where you test that your code satisfies a set of predefined properties, using a wide range of input. What's the difference between this and unit testing? That last part: using a wide range of input.

A library for property-based testing allows you to write a test that will accept randomly generated data. The idea is that the library runs your tests many times, each time with different data. The library is actually trying to make your test fail. We'll see an example below.

The difference with unit testing is that with unit testing, you are responsible for thinking about the input. If you forget a specific case that could make your test fail, you'll ship a bug. With property-based testing, you have more data fired at your code, and there's a higher chance that you will discover the bug.
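The core loop is simple enough to hand-roll. Here's a minimal sketch in Python (not FsCheck, just the bare concept, with a seeded generator so it's repeatable): generate random inputs, feed them to a property, and report the first one that breaks it.

```python
import random
import string

random.seed(42)  # seeded so the sketch is repeatable

def check_property(prop, gen, runs=100):
    """Run prop against many randomly generated inputs.
    Return the first failing input, or None if every run passed."""
    for _ in range(runs):
        value = gen()
        if not prop(value):
            return value
    return None

def random_string():
    return "".join(random.choices(string.printable, k=random.randint(0, 20)))

# A deliberately wrong "property": upper-casing a string never changes it.
counterexample = check_property(lambda s: s.upper() == s, random_string)
assert counterexample is not None  # the harness found a string that breaks it
```

With a hundred varied inputs per run, the harness stumbles onto the lowercase letters the "property" forgot about; that's the bug-finding effect the article describes, minus FsCheck's real generators and shrinking.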

On a side note, the original library for property-based testing was QuickCheck, written in and for Haskell. It was initially released in 1999, so it's not a new idea.

Let's Write Our First Test

Let's start simple and assume we want to write a method that can import a user into an existing system. In my case, I was writing an Orchard module and I needed to import a list of email addresses. I will simplify things here and only import one email address. I'll also remove some extra details that aren't important to get my point across.

I will be using .NET in this article. But if you're not using .NET, just search the internet and you should be able to find a library for your language. There are property-based testing libraries for many different languages. For .NET, FsCheck is the go-to library. It's written in F#, but we can use it for any .NET language.

Create a new class library and add the following NuGet packages:

  • FsCheck.Xunit
  • xunit
  • xunit.runner.visualstudio (because I'm working in Visual Studio)
  • Moq (for the mock IUserService in the tests below)
  • FluentAssertions (for the Should() assertions)

Now, define the property of the system that we're implementing:

public class UserImportServiceTests
{
    [Property]
    public void ShouldImportValidEmail(string email)
    {
        var mockUserService = new Mock<IUserService>();
        var service = new UserImportService(mockUserService.Object);

        var result = service.ImportUser(email);

        result.Should().BeTrue();
        mockUserService.Verify(x => x.CreateUser(email));
    }
}

Then, in another class library, we can start implementing. In the spirit of TDD, we'll just make it compile first:

public class UserImportService
{
    private readonly IUserService _userService;

    public UserImportService(IUserService userService)
    {
        _userService = userService;
    }

    public bool ImportUser(string email)
    {
        throw new NotImplementedException();
    }
}

When we run our tests, we immediately see what we expect: they fail.

Now, let's implement it in what we think is a correct way:

public class UserImportService
{
    private readonly IUserService _userService;

    public UserImportService(IUserService userService)
    {
        _userService = userService;
    }

    private const string EmailPattern =
        @"^(?![\.@])(""([^""\r\\]|\\[""\r\\])*""|([-\p{L}0-9!#$%&'*+/=?^_`{|}~]|(?<!\.)\.)*)(?<!\.)"
        + @"@([a-z0-9][\w-]*\.)+[a-z]{2,}$";

    public bool ImportUser(string email)
    {
        if (string.IsNullOrEmpty(email) || !Regex.IsMatch(email, EmailPattern))
        {
            return false;
        }

        _userService.CreateUser(email);
        return true;
    }
}

That should do it. We validate the email using a regular expression and then call the UserService if it validates. This UserService is a built-in service in Orchard but doesn't provide a way of importing users in bulk. That's the reason I was writing this UserImportService.

When we run our test, FsCheck will generate various sets of input data and use them in the test. In our case, FsCheck will generate random strings. We can see that FsCheck managed to make our test fail.

But it's failing because FsCheck is trying values that aren't valid emails. It tried using "\003wf" as an email address. Because that failed, it tried a simpler value. This is what is called shrinking. FsCheck succeeded in "shrinking" to the simplest value, "a", and the test still failed. But that defeats the purpose of our test, because we want it to use valid emails.
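Shrinking itself can be sketched in a few lines. This toy Python version just greedily deletes characters from a failing input for as long as the test keeps failing; real shrinkers like FsCheck's are much smarter, but the idea is the same.

```python
def shrink_string(s, fails):
    """Greedily shorten a failing input while it still fails.
    A toy version of what a property-based testing library does."""
    current = s
    improved = True
    while improved:
        improved = False
        # Try removing each character in turn; keep the first smaller input that still fails.
        for i in range(len(current)):
            candidate = current[:i] + current[i + 1:]
            if fails(candidate):
                current = candidate
                improved = True
                break
    return current

# Suppose any string containing the letter "a" makes our test fail.
fails = lambda s: "a" in s
assert shrink_string("\x03wfa", fails) == "a"
```

Starting from a noisy counterexample like "\x03wfa", the shrinker ends up at the minimal failing input "a", which is exactly the kind of report FsCheck gave above.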

What we need is a way of telling FsCheck to only use valid emails. This is where generators fit in. Generators are responsible for generating the random data that is injected into our tests. Luckily, FsCheck already supports the System.Net.Mail.MailAddress class so we can change our tests to look like this:

[Property]
public void ShouldImportValidEmail(MailAddress mailAddress)
{
    var mockUserService = new Mock<IUserService>();
    var service = new UserImportService(mockUserService.Object);

    var result = service.ImportUser(mailAddress.Address);

    result.Should().BeTrue();
    mockUserService.Verify(x => x.CreateUser(mailAddress.Address));
}

Notice how we now accept a MailAddress instance in our test. We still pass a string to our service because that is what is sent from the client to our UserImportService. Let's run our tests again.

Whoa! FsCheck created some crazy email addresses. That can't be right, can it? Well, it actually is, because "com" is a domain just like "ncrunch.net" is. It's just a top-level domain, and usually the maintainers of TLDs won't create email addresses in that domain. But they could. So let's change our code:

public bool ImportUser(string email)
{
    try
    {
        var mailAddress = new MailAddress(email);
        _userService.CreateUser(mailAddress.Address);       
    }
    catch (FormatException)
    {
        return false;
    }

    return true;
}

Now our test passes.

Let's take a moment to think about this. When we write a test, we're already thinking about the happy flow and the edge cases, which means our tests and our implementation are more tied together than we would like to admit. If I had written unit tests, I would have written a specific set of tests, and possibly not have thought about certain cases. In the above case, I would have written a test for a valid email address and one for an invalid email address. Or at least for what I think is an invalid email address! I didn't know that foo@com is a valid email address.

In my real-world case, a similar bug made it to production and it only surfaced after a few months when the admin tried to import an email address like peter.Morlion@example.com (notice the capital M). This led to a bug report, which I reproduced in a unit test and then fixed. The special case is now covered. But with the property-based testing approach, I would have found this bug before it was released.

That's the advantage of property-based testing—FsCheck will "think" about edge cases that you didn't imagine. As I've mentioned above, it's trying hard to make your code fail. It's a good thing if it succeeds. We'd rather have our test find a bug than have a user find one.

Characteristics of Property-based Testing Tools

Let's back away from the code now. You might be thinking my test only failed because I got lucky. Maybe next time the generator won't enter any fancy emails, or won't use capital letters. That's a valid concern. Property-based tests can only work if the generators do their job well. There could be something missing in a generator, but I've shown that your unit tests can also miss certain cases. With a good generator, your code will be tested many more times and with far more varied data than you could write into unit tests by hand. So you're actually testing your code more thoroughly than with traditional unit tests.

There are some more points that we can look at, but that would take us beyond a simple introduction. Here is a summary of what a good property-based testing library must do:

  • Generate random data to use in your tests.
  • Show the simplest set of data that fails the test. We call this shrinking.
  • Allow you to create custom generators.
  • Provide a way of defining how many iterations of a test should be performed.
  • Allow filtering data from those generators.

FsCheck provides all of this and a lot more. The documentation is F#-centric but provides some useful C# examples too.

To finish, let's jump into that last point about filtering.

Filtering

We tested our code with valid email addresses. But we also need to test it with strings that aren't valid email addresses. To do this, we'll use the existing generator for strings but filter the result. With FsCheck, you need to register your generator in the static constructor of your test class. FsCheck then looks for static properties in the class you provided to the register method. An example will clarify this:

static UserImportServiceTests()
{
    // Register all static properties that return a generator in this class
    Arb.Register<UserImportServiceTests>();
}

// Return the string generator, but filter out anything that returns a valid email address
public static Arbitrary<InvalidMail> InvalidMailGenerator => Arb.From<string>().Filter(x =>
{
    try
    {
        new MailAddress(x);
        return false;
    }
    catch (Exception)
    {
        return true;
    }
}).Convert(s => new InvalidMail(s), i => i.Value);

Notice how we register an Arbitrary<InvalidMail> and not an Arbitrary<string>. That's because in our generator, we'll be using Arb.From<string>. If we registered this as an Arbitrary<string>, FsCheck would use our new generator when we call Arb.From<string> inside our generator. This leads to a StackOverflowException. The InvalidMail class is a simple class that accepts a string and stores it in a Value property.
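The mechanics of a filtered generator are easy to sketch outside FsCheck. This hypothetical Python version (with a deliberately naive stand-in for MailAddress parsing — names and validation rules here are illustrative, not FsCheck's API) draws from a base generator until the predicate accepts a value:

```python
import random
import string

random.seed(7)  # seeded so the sketch is repeatable

def string_gen():
    alphabet = string.ascii_letters + "@. "
    return "".join(random.choices(alphabet, k=random.randint(0, 12)))

def filtered(gen, keep, max_tries=1000):
    """Draw from gen until keep accepts a value (FsCheck's Filter works similarly)."""
    def draw():
        for _ in range(max_tries):
            value = gen()
            if keep(value):
                return value
        raise RuntimeError("filter rejected everything; it may be too restrictive")
    return draw

def is_valid_email(s):
    """Deliberately naive stand-in for MailAddress parsing; purely illustrative."""
    local, sep, domain = s.partition("@")
    return bool(local) and sep == "@" and bool(domain) and " " not in s

# A generator that only yields strings that are NOT valid email addresses.
invalid_email_gen = filtered(string_gen, lambda s: not is_valid_email(s))
assert not is_valid_email(invalid_email_gen())
```

Note that this filter-by-rejection approach is also why an overly strict predicate is dangerous: if almost nothing passes, the generator wastes most of its draws, which is the trade-off FsCheck's Filter makes too.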

With this, we can now write a new test like this:

[Property]
public void ShouldNotImportInvalidEmail(string invalidEmail)
{
    var mockUserService = new Mock<IUserService>();
    var service = new UserImportService(mockUserService.Object);

    var result = service.ImportUser(invalidEmail);

    result.Should().BeFalse();
    mockUserService.Verify(x => x.CreateUser(invalidEmail), Times.Never);
}

This leads to another edge case that I didn't think about: an empty string.

When an empty string is passed, an ArgumentException is thrown. In our UserImportService, we took FormatExceptions into account, but not ArgumentExceptions. So we'll have to change our code to this:

public bool ImportUser(string email)
{
    try
    {
        var mailAddress = new MailAddress(email);
        _userService.CreateUser(mailAddress.Address);
    }
    catch (FormatException)
    {
        return false;
    }
    catch (ArgumentException)
    {
        return false;
    }

    return true;
}

You could also catch the general System.Exception type, but I personally like to catch exceptions as specifically as possible. Our tests now pass.

Property-based or Unit Tests? Both!

Our code now passes more test cases than I could have thought of myself. We started with a regular expression that wasn't sufficient. Then we used the System.Net.Mail.MailAddress class, but didn't catch all possible exceptions. After using property-based tests, our code is more robust than it would have been with unit tests alone. This is not to say that unit tests no longer have a place: it can be easier to control the exact input of a unit test, and sometimes that's necessary.

Property-based tests aren't fully deterministic. But consider that many of your unit tests could probably be replaced with property-based tests, and that such a change would lead to more bugs being found earlier. It's another tool in your testing toolbelt.

This post was written by Peter Morlion. Peter is a passionate programmer who helps people and companies improve the quality of their code, especially in legacy codebases. He firmly believes that industry best practices are invaluable when working towards this goal, and his specialties include TDD, DI, and SOLID principles.


Why Raspberry Pi isn’t vulnerable to Spectre or Meltdown


Over the last couple of days, there has been a lot of discussion about a pair of security vulnerabilities nicknamed Spectre and Meltdown. These affect all modern Intel processors, and (in the case of Spectre) many AMD processors and ARM cores. Spectre allows an attacker to bypass software checks to read data from arbitrary locations in the current address space; Meltdown allows an attacker to read arbitrary data from the operating system kernel’s address space (which should normally be inaccessible to user programs).

Both vulnerabilities exploit performance features (caching and speculative execution) common to many modern processors to leak data via a so-called side-channel attack. Happily, the Raspberry Pi isn’t susceptible to these vulnerabilities, because of the particular ARM cores that we use.

To help us understand why, here’s a little primer on some concepts in modern processor design. We’ll illustrate these concepts using simple programs in Python syntax like this one:

t = a+b
u = c+d
v = e+f
w = v+g
x = h+i
y = j+k

While the processor in your computer doesn’t execute Python directly, the statements here are simple enough that they roughly correspond to a single machine instruction. We’re going to gloss over some details (notably pipelining and register renaming) which are very important to processor designers, but which aren’t necessary to understand how Spectre and Meltdown work.

For a comprehensive description of processor design, and other aspects of modern computer architecture, you can’t do better than Hennessy and Patterson’s classic Computer Architecture: A Quantitative Approach.

What is a scalar processor?

The simplest sort of modern processor executes one instruction per cycle; we call this a scalar processor. Our example above will execute in six cycles on a scalar processor.

Examples of scalar processors include the Intel 486 and the ARM1176 core used in Raspberry Pi 1 and Raspberry Pi Zero.

What is a superscalar processor?

The obvious way to make a scalar processor (or indeed any processor) run faster is to increase its clock speed. However, we soon reach limits of how fast the logic gates inside the processor can be made to run; processor designers therefore quickly began to look for ways to do several things at once.

An in-order superscalar processor examines the incoming stream of instructions and tries to execute more than one at once, in one of several “pipes”, subject to dependencies between the instructions. Dependencies are important: you might think that a two-way superscalar processor could just pair up (or dual-issue) the six instructions in our example like this:

t, u = a+b, c+d
v, w = e+f, v+g
x, y = h+i, j+k

But this doesn’t make sense: we have to compute v before we can compute w, so the third and fourth instructions can’t be executed at the same time. Our two-way superscalar processor won’t be able to find anything to pair with the third instruction, so our example will execute in four cycles:

t, u = a+b, c+d
v    = e+f                   # second pipe does nothing here
w, x = v+g, h+i
y    = j+k

Examples of superscalar processors include the Intel Pentium, and the ARM Cortex-A7 and Cortex-A53 cores used in Raspberry Pi 2 and Raspberry Pi 3 respectively. Raspberry Pi 3 has only a 33% higher clock speed than Raspberry Pi 2, but has roughly double the performance: the extra performance is partly a result of Cortex-A53’s ability to dual-issue a broader range of instructions than Cortex-A7.

What is an out-of-order processor?

Going back to our example, we can see that, although we have a dependency between v and w, we have other independent instructions later in the program that we could potentially have used to fill the empty pipe during the second cycle. An out-of-order superscalar processor has the ability to shuffle the order of incoming instructions (again subject to dependencies) in order to keep its pipelines busy.

An out-of-order processor might effectively swap the definitions of w and x in our example like this:

t = a+b
u = c+d
v = e+f
x = h+i
w = v+g
y = j+k

allowing it to execute in three cycles:

t, u = a+b, c+d
v, x = e+f, h+i
w, y = v+g, j+k

Examples of out-of-order processors include the Intel Pentium 2 (and most subsequent Intel and AMD x86 processors), and many recent ARM cores, including Cortex-A9, -A15, -A17, and -A57.
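The cycle counts above can be checked with a toy scheduler. This Python sketch (in the spirit of the article's pseudocode) models only issue width and data dependencies; pipelining, instruction latencies, and register renaming are all ignored.

```python
def schedule(instrs, width=2, reorder=False):
    """Toy cycle-count model of a superscalar processor.

    Each instruction is (dest, [sources]); an instruction may issue once all
    of its sources were produced in an earlier cycle, and every instruction
    takes exactly one cycle."""
    done = set()          # values produced in completed cycles
    pending = list(instrs)
    cycles = 0
    while pending:
        issued = []
        for instr in list(pending):
            if len(issued) == width:
                break
            dest, srcs = instr
            if all(s in done for s in srcs):
                issued.append(instr)
                pending.remove(instr)
            elif not reorder:
                break     # in-order: a stalled instruction blocks the ones behind it
        done |= {dest for dest, _ in issued}
        cycles += 1
    return cycles

# t=a+b; u=c+d; v=e+f; w=v+g; x=h+i; y=j+k  (inputs a..k already available)
program = [("t", []), ("u", []), ("v", []), ("w", ["v"]), ("x", []), ("y", [])]
assert schedule(program, width=1) == 6                 # scalar
assert schedule(program, width=2) == 4                 # in-order dual-issue
assert schedule(program, width=2, reorder=True) == 3   # out-of-order dual-issue
```

The three assertions reproduce the 6-, 4-, and 3-cycle schedules worked out by hand above: the only difference between the last two is whether a stalled instruction (w, waiting on v) is allowed to let later independent instructions overtake it.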

What is speculation?

Reordering sequential instructions is a powerful way to recover more instruction-level parallelism, but as processors become wider (able to triple- or quadruple-issue instructions) it becomes harder to keep all those pipes busy. Modern processors have therefore grown the ability to speculate. Speculative execution lets us issue instructions which might turn out not to be required (because they are branched over): this keeps a pipe busy, and if it turns out that the instruction isn’t executed, we can just throw the result away.

To demonstrate the benefits of speculation, let’s look at another example:

t = a+b
u = t+c
v = u+d
if v:
   w = e+f
   x = w+g
   y = x+h

Now we have dependencies from t to u to v, and from w to x to y, so a two-way out-of-order processor without speculation won’t ever be able to fill its second pipe. It spends three cycles computing t, u, and v, after which it knows whether the body of the if statement will execute, in which case it then spends three cycles computing w, x, and y. Assuming the if (a branch instruction) takes one cycle, our example takes either four cycles (if v turns out to be zero) or seven cycles (if v is non-zero).

Speculation effectively shuffles the program like this:

t = a+b
u = t+c
v = u+d
w_ = e+f
x_ = w_+g
y_ = x_+h
if v:
   w, x, y = w_, x_, y_

so we now have additional instruction level parallelism to keep our pipes busy:

t, w_ = a+b, e+f
u, x_ = t+c, w_+g
v, y_ = u+d, x_+h
if v:
   w, x, y = w_, x_, y_

Cycle counting becomes less well defined in speculative out-of-order processors, but the branch and conditional update of w, x, and y are (approximately) free, so our example executes in (approximately) three cycles.

What is a cache?

In the good old days*, the speed of processors was well matched with the speed of memory access. My BBC Micro, with its 2MHz 6502, could execute an instruction roughly every 2µs (microseconds), and had a memory cycle time of 0.25µs. Over the ensuing 35 years, processors have become very much faster, but memory only modestly so: a single Cortex-A53 in a Raspberry Pi 3 can execute an instruction roughly every 0.5ns (nanoseconds), but can take up to 100ns to access main memory.

At first glance, this sounds like a disaster: every time we access memory, we’ll end up waiting for 100ns to get the result back. In this case, this example:

a = mem[0]
b = mem[1]

would take 200ns.

In practice, programs tend to access memory in relatively predictable ways, exhibiting both temporal locality (if I access a location, I’m likely to access it again soon) and spatial locality (if I access a location, I’m likely to access a nearby location soon). Caching takes advantage of these properties to reduce the average cost of access to memory.

A cache is a small on-chip memory, close to the processor, which stores copies of the contents of recently used locations (and their neighbours), so that they are quickly available on subsequent accesses. With caching, the example above will execute in a little over 100ns:

a = mem[0]    # 100ns delay, copies mem[0:15] into cache
b = mem[1]    # mem[1] is in the cache

From the point of view of Spectre and Meltdown, the important point is that if you can time how long a memory access takes, you can determine whether the address you accessed was in the cache (short time) or not (long time).
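That timing difference is easy to model. This toy Python cache uses made-up latencies (1 "cycle" for a hit, 100 for a miss, roughly in the spirit of the 0.5ns vs 100ns figures above) and 16-byte lines, matching the mem[0:15] example:

```python
class ToyCache:
    """Accesses are cheap if the 16-byte line was touched recently.
    The latencies are invented, but in roughly the right ratio."""
    HIT, MISS = 1, 100

    def __init__(self):
        self.lines = set()

    def access(self, addr):
        line = addr // 16                 # 16-byte cache lines
        cost = self.HIT if line in self.lines else self.MISS
        self.lines.add(line)              # the line is cached after the access
        return cost

cache = ToyCache()
assert cache.access(0) == 100   # a = mem[0]: cold miss, pulls in mem[0:15]
assert cache.access(1) == 1     # b = mem[1]: same line, now a hit
```

An observer who can measure these costs learns which lines are already cached — and that observation channel is all Spectre and Meltdown need.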

What is a side channel?

From Wikipedia:

“… a side-channel attack is any attack based on information gained from the physical implementation of a cryptosystem, rather than brute force or theoretical weaknesses in the algorithms (compare cryptanalysis). For example, timing information, power consumption, electromagnetic leaks or even sound can provide an extra source of information, which can be exploited to break the system.”

Spectre and Meltdown are side-channel attacks which deduce the contents of a memory location which should not normally be accessible by using timing to observe whether another location is present in the cache.

Putting it all together

Now let’s look at how speculation and caching combine to permit the Meltdown attack. Consider the following example, which is a user program that sometimes reads from an illegal (kernel) address:

t = a+b
u = t+c
v = u+d
if v:
   w = kern_mem[address]   # if we get here crash
   x = w&0x100
   y = user_mem[x]

Now our out-of-order two-way superscalar processor shuffles the program like this:

t, w_ = a+b, kern_mem[address]
u, x_ = t+c, w_&0x100
v, y_ = u+d, user_mem[x_]

if v:
   # crash
   w, x, y = w_, x_, y_      # we never get here

Even though the processor always speculatively reads from the kernel address, it must defer the resulting fault until it knows that v was non-zero. On the face of it, this feels safe because either:

  • v is zero, so the result of the illegal read isn’t committed to w
  • v is non-zero, so the program crashes before the read is committed to w

However, suppose we flush our cache before executing the code, and arrange a, b, c, and d so that v is zero. Now, the speculative load in the third cycle:

v, y_ = u+d, user_mem[x_]

will read from either address 0x000 or address 0x100 depending on the eighth bit of the result of the illegal read. Because v is zero, the results of the speculative instructions will be discarded, and execution will continue. If we time a subsequent access to one of those addresses, we can determine which address is in the cache. Congratulations: you’ve just read a single bit from the kernel’s address space!

The real Meltdown exploit is more complex than this, but the principle is the same. Spectre uses a similar approach to subvert software array bounds checks.
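The whole flow can be condensed into a toy Python simulation. Everything here is modelled, not real: the "kernel read" is just a variable and the "timing" a fake latency, but the movement of information — from secret bit, to speculatively warmed cache line, to measured access time — is the same as in the attack above.

```python
def meltdown_toy(secret):
    """Toy model: speculative instructions leave a cache footprint that
    survives even though their architectural results are discarded."""
    cached = set()                    # the cache starts flushed
    line = lambda addr: addr // 16    # 16-byte cache lines

    # --- speculative execution (v turns out to be 0, results discarded) ---
    w_ = secret                       # illegal kernel read; the fault is deferred
    x_ = w_ & 0x100                   # 0x000 or 0x100, depending on one secret bit
    cached.add(line(x_))              # user_mem[x_] is loaded, warming a cache line
    # v == 0, so w_, x_, y_ are thrown away... but the cache line stays warm.

    # --- the attacker probes both candidate addresses and times them ---
    time = lambda addr: 1 if line(addr) in cached else 100
    return 1 if time(0x100) < time(0x000) else 0

assert meltdown_toy(0x1ab) == 1   # bit 8 of the secret was 1
assert meltdown_toy(0x0ab) == 0   # bit 8 of the secret was 0
```

Repeating this with different masks would recover the secret one bit at a time, which is why the discarded speculative results still constitute a leak.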

Conclusion

Modern processors go to great lengths to preserve the abstraction that they are in-order scalar machines that access memory directly, while in fact using a host of techniques including caching, instruction reordering, and speculation to deliver much higher performance than a simple processor could hope to achieve. Meltdown and Spectre are examples of what happens when we reason about security in the context of that abstraction, and then encounter minor discrepancies between the abstraction and reality.

The lack of speculation in the ARM1176, Cortex-A7, and Cortex-A53 cores used in Raspberry Pi renders us immune to attacks of this sort.

* days may not be that old, or that good

The post Why Raspberry Pi isn’t vulnerable to Spectre or Meltdown appeared first on Raspberry Pi.


My Universal Performance Problem Advice


I get asked for recommendations a lot.  Most of the time I have little to no data when asked to perform this sort of divination.  But as it turns out I have this ready-to-go universal advice that works for me, so I'm able to give the same recommendation all the time even with no data!  Handy, huh?

Here it is:

Load as little as you can.  Run as little as you can.  Use as little memory as you can.  Take the fewest dependencies you can.  Create the simplest UI that you can.  And measure it often.  Don’t just measure your one success metric, measure all the consumption and make it as small as you can.  Consider all the important targets, including small devices and large servers.  Nothing else will do, nor is anything else necessary. 

When you understand your costs you will be making solid choices.

Never use non-specific data you think you remember to justify anything.

Last, and most important of all, never take the advice of some smart-ass performance expert like me when you could get a good solid measurement instead.  :)


Anomalous Propulsion Drive Verified at NASA

Here’s a very interesting development that has been brought up here on the Always Open thread, and also discussed on vortex-l. An article on Wired UK by David Hambling reports how a US scientist named Guido Fetta has built a microwave thruster which works without any propellant and has had a NASA team conduct extensive […]

Ever have a day like this one?

  • Check email and notice a message from somebody having trouble using SQLitePCL.raw on Windows Phone 8.1. Realize that I haven't run the test suite since I started working on the new build scripts. Assume that I broke something.

  • Hook up the automated test project to the output of the new build system. Sure enough, the tests fail.

  • Notice that the error message is different from the one in the user's email.

  • Realize that the user is actually using the old build system, not the new one. Wonder how that could have broken.

  • Bring up the old build system, run the tests. Yep, they fail here too. Must be something in the actual code.

  • Dig around for a while and try to find what changed.

  • Use git to go back to the last commit before I started the new build system stuff. Rebuild all. Run the tests. They pass. Good. Now I just have to diff and figure out which change caused the breakage.

  • git my working directory back to the current version of the code. Rebuild all and run the tests again to watch them fail again. BUT NOW THEY PASS.

  • Wonder if perhaps Visual Studio is less frustrating for people who drink Scotch in the mornings.

  • Decide that maybe something was flaky in my machine. The tests are passing again, so there's no problem.

  • Realize that the user wasn't actually running the test suite. He was trying to reference from his own project. And he had to do that manually, because I haven't published the nuget package yet. Maybe he just screwed up the reference or didn't copy all the necessary pieces.

  • Run the tests in the new build system to watch them pass there as well. But here they STILL FAIL.

  • Decide to take the build system out of the equation and just finish getting things working right with nuget. Build the unit test package separately in its own solution. Add a reference to the nuget package and start working out the issues.

  • Run the tests. Everything throws because the reference got added to the "bait" version of the PCL instead of the WP81 platform assembly. Oh well. This is what I need to be fixing anyway.

  • Notice that the .targets file didn't get properly imported into the test project when the package was installed. Wonder why. But that's gotta be why the platform assembly didn't get referenced.

  • Realize that the bait assembly somehow got referenced. Wonder why.

  • What is Scotch anyway? Go read several articles about single malt whiskey.

  • Decide to take nuget out of the equation and focus on why the new build system is producing dlls that won't load.

  • Google the error message "Package failed updates, dependency or conflict validation". I need to know exactly what was the cause of the failure.

  • Realize that the default search engine of IE is Bing. Do the same search in Google. Get different results.

  • Become annoyed when co-worker interrupts me to tell me that there is a new trailer for Guardians of the Galaxy.

  • Read a web page on the Microsoft website which explains how to get the actual details of that error message. Spend time wandering around Event Viewer until I see the right stuff.

  • Realize that the web page is actually talking about WinRT on the desktop, not Windows Phone.

  • Try to find a way to get developer-grade error messages in the Windows Phone emulator. Fail.

  • Notice that below the error message, Visual Studio's suggested resolution is to instead use a unit test project that is targeted for Windows Phone, even though IT ALREADY IS.

  • Blame Steve Ballmer FOR EVERYTHING.

  • Wonder if WP81 is the only thing that broke. Run the tests for WinRT. They fail as well.

  • Get annoyed because the only way Visual Studio can run the unit tests for just one project is to unload all the others.

  • Get upset because the Visual Studio Reload Project command doesn't work like the way it did a week or two ago. Now it reloads all the projects instead of just the one I wanted. Did the installation of the Xamarin Visual Studio integration break it?

  • Go back to the very basics. Run the unit tests for plain old .NET 4.5. They pass.

  • Re-run the unit tests for WinRT to watch them fail again. NOW THEY PASS.

  • Realize the co-worker is absolutely right. The most important thing is to watch the Guardians of the Galaxy trailer.

  • Get annoyed because the sound on my MBP isn't working. Watch the whole trailer anyway, without sound.

  • Review all my project settings in the Visual Studio dialogs, just to see if I notice anything odd.

  • Go back to my web browser. Realize that the world of Scotch whiskey might actually be more complicated than Visual Studio.

  • Go home. Discover that the annual spring invasion of ants in our kitchen is proceeding nicely.

  • Fight some more with Visual Studio. Give up. Go to bed.

  • Wake up the next morning. Discover that the teenager's contribution to our war against the ants was to leave unrinsed plates by the sink. Thousands of ants feasting on cheesecake debris and syrup.

  • Open the laptop. Run diff to compare the csproj and vcxproj files from the old build system against the new one. See that there are no differences that should make any difference.

  • Change them all anyway. Update every setting to exactly match the old build system. One at a time. Run the test suite after each tweak so I can figure out exactly which of the seemingly harmless changes caused the breakage.

  • Wait. My kid had cheesecake and waffles FOR DINNER?

  • Become seriously annoyed that Visual Studio changes the Output pane from "Tests" to "Build" EVERY SINGLE TIME I run the tests.

  • Finish getting all the settings to match. The tests still don't pass.

  • Try to remember if I've ever done anything successfully. Anything at all. Distinctly recall that when I was mowing the lawn this weekend, the grass got shorter. Focus on that accomplishment. Build on that success.

  • Realize that the old build system works and the new one doesn't. There has to be a difference that I'm missing. I just have to find it.

  • Go back to the old build system. Rebuild all. Run the tests so I can watch them pass and start over from there. BUT NOW THEY'RE FAILING AGAIN.

  • Go do something else.

 


Introducing dotPeek 1.2 Early Access Program


It has been a while since dotPeek, our free .NET decompiler, last received an update, but that doesn’t mean we put it aside. Today we’re ready to launch the dotPeek 1.2 Early Access Program, which introduces a substantial set of new features.

Starting with version 1.2, dotPeek can act as a symbol server and supply the Visual Studio debugger with the information required to debug assembly code. This is most useful when debugging a project that references an assembly from an external class library.

dotPeek listens for requests from Visual Studio debugger, generates PDB files and source files for the requested assemblies on demand, and returns them back to the debugger. dotPeek provides several options to choose exactly which assemblies you want it to generate symbol files for.

Symbol server options in dotPeek 1.2 EAP

To learn more on how to set up dotPeek as a symbol server and use it for debugging in Visual Studio, please refer to this guide.

If the Visual Studio cache already contains PDB files for certain assemblies and you would like to replace them with PDB files generated by dotPeek, use the option to generate PDB files manually. To do that, simply select an assembly in dotPeek’s Assembly Explorer, right-click it, and choose Generate PDB.

Generate pdb in dotPeek 1.2

dotPeek can export assemblies to projects and generate PDB files in the background, meaning that you can keep exploring assemblies during PDB generation or assembly export. For cases where it’s not clear whether PDB files were generated properly, dotPeek has a dedicated tool window that shows the current status and results of PDB generation.

PDB generation status in dotPeek 1.2 EAP

In addition to the set of features that streamline debugging decompiled code, dotPeek 1.2 adds quick search and node filtering in various trees, most notably Assembly Explorer. Searching and filtering using lowerCamelHumps is supported for these scenarios.

Search in Assembly Explorer in dotPeek 1.2 EAP

If you’re interested in learning about the other fixes and improvements made for dotPeek 1.2 EAP, this link should help you out.

Does the above sound enticing? Download dotPeek 1.2 EAP and give it a try!
