Why Raspberry Pi isn’t vulnerable to Spectre or Meltdown

Over the last couple of days, there has been a lot of discussion about a pair of security vulnerabilities nicknamed Spectre and Meltdown. These affect all modern Intel processors, and (in the case of Spectre) many AMD processors and ARM cores. Spectre allows an attacker to bypass software checks to read data from arbitrary locations in the current address space; Meltdown allows an attacker to read arbitrary data from the operating system kernel’s address space (which should normally be inaccessible to user programs).

Both vulnerabilities exploit performance features (caching and speculative execution) common to many modern processors to leak data via a so-called side-channel attack. Happily, the Raspberry Pi isn’t susceptible to these vulnerabilities, because of the particular ARM cores that we use.

To help us understand why, here’s a little primer on some concepts in modern processor design. We’ll illustrate these concepts using simple programs in Python syntax like this one:

t = a+b
u = c+d
v = e+f
w = v+g
x = h+i
y = j+k

While the processor in your computer doesn’t execute Python directly, the statements here are simple enough that they roughly correspond to a single machine instruction. We’re going to gloss over some details (notably pipelining and register renaming) which are very important to processor designers, but which aren’t necessary to understand how Spectre and Meltdown work.

For a comprehensive description of processor design, and other aspects of modern computer architecture, you can’t do better than Hennessy and Patterson’s classic Computer Architecture: A Quantitative Approach.

What is a scalar processor?

The simplest sort of modern processor executes one instruction per cycle; we call this a scalar processor. Our example above will execute in six cycles on a scalar processor.

Examples of scalar processors include the Intel 486 and the ARM1176 core used in Raspberry Pi 1 and Raspberry Pi Zero.

What is a superscalar processor?

The obvious way to make a scalar processor (or indeed any processor) run faster is to increase its clock speed. However, we soon reach limits of how fast the logic gates inside the processor can be made to run; processor designers therefore quickly began to look for ways to do several things at once.

An in-order superscalar processor examines the incoming stream of instructions and tries to execute more than one at once, in one of several “pipes”, subject to dependencies between the instructions. Dependencies are important: you might think that a two-way superscalar processor could just pair up (or dual-issue) the six instructions in our example like this:

t, u = a+b, c+d
v, w = e+f, v+g
x, y = h+i, j+k

But this doesn’t make sense: we have to compute v before we can compute w, so the third and fourth instructions can’t be executed at the same time. Our two-way superscalar processor won’t be able to find anything to pair with the third instruction, so our example will execute in four cycles:

t, u = a+b, c+d
v    = e+f                   # second pipe does nothing here
w, x = v+g, h+i
y    = j+k

Examples of superscalar processors include the Intel Pentium, and the ARM Cortex-A7 and Cortex-A53 cores used in Raspberry Pi 2 and Raspberry Pi 3 respectively. Raspberry Pi 3 has only a 33% higher clock speed than Raspberry Pi 2, but has roughly double the performance: the extra performance is partly a result of Cortex-A53’s ability to dual-issue a broader range of instructions than Cortex-A7.

What is an out-of-order processor?

Going back to our example, we can see that, although we have a dependency between v and w, we have other independent instructions later in the program that we could potentially have used to fill the empty pipe during the second cycle. An out-of-order superscalar processor has the ability to shuffle the order of incoming instructions (again subject to dependencies) in order to keep its pipelines busy.

An out-of-order processor might effectively swap the definitions of w and x in our example like this:

t = a+b
u = c+d
v = e+f
x = h+i
w = v+g
y = j+k

allowing it to execute in three cycles:

t, u = a+b, c+d
v, x = e+f, h+i
w, y = v+g, j+k

Examples of out-of-order processors include the Intel Pentium 2 (and most subsequent Intel and AMD x86 processors), and many recent ARM cores, including Cortex-A9, -A15, -A17, and -A57.

What is speculation?

Reordering sequential instructions is a powerful way to recover more instruction-level parallelism, but as processors become wider (able to triple- or quadruple-issue instructions) it becomes harder to keep all those pipes busy. Modern processors have therefore grown the ability to speculate. Speculative execution lets us issue instructions which might turn out not to be required (because they are branched over): this keeps a pipe busy, and if it turns out that the instruction isn’t executed, we can just throw the result away.

To demonstrate the benefits of speculation, let’s look at another example:

t = a+b
u = t+c
v = u+d
if v:
   w = e+f
   x = w+g
   y = x+h

Now we have dependencies from t to u to v, and from w to x to y, so a two-way out-of-order processor without speculation won’t ever be able to fill its second pipe. It spends three cycles computing t, u, and v, after which it knows whether the body of the if statement will execute, in which case it then spends three cycles computing w, x, and y. Assuming the if (a branch instruction) takes one cycle, our example takes either four cycles (if v turns out to be zero) or seven cycles (if v is non-zero).

Speculation effectively shuffles the program like this:

t = a+b
u = t+c
v = u+d
w_ = e+f
x_ = w_+g
y_ = x_+h
if v:
   w, x, y = w_, x_, y_

so we now have additional instruction-level parallelism to keep our pipes busy:

t, w_ = a+b, e+f
u, x_ = t+c, w_+g
v, y_ = u+d, x_+h
if v:
   w, x, y = w_, x_, y_

Cycle counting becomes less well defined in speculative out-of-order processors, but the branch and conditional update of w, x, and y are (approximately) free, so our example executes in (approximately) three cycles.

What is a cache?

In the good old days*, the speed of processors was well matched with the speed of memory access. My BBC Micro, with its 2MHz 6502, could execute an instruction roughly every 2µs (microseconds), and had a memory cycle time of 0.25µs. Over the ensuing 35 years, processors have become very much faster, but memory only modestly so: a single Cortex-A53 in a Raspberry Pi 3 can execute an instruction roughly every 0.5ns (nanoseconds), but can take up to 100ns to access main memory.

At first glance, this sounds like a disaster: every time we access memory, we’ll end up waiting for 100ns to get the result back. In this case, this example:

a = mem[0]
b = mem[1]

would take 200ns.

In practice, programs tend to access memory in relatively predictable ways, exhibiting both temporal locality (if I access a location, I’m likely to access it again soon) and spatial locality (if I access a location, I’m likely to access a nearby location soon). Caching takes advantage of these properties to reduce the average cost of access to memory.
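
As a rough illustration of spatial locality, here is a sketch in the same Python flavour as our other examples: it sums a two-dimensional grid in row-major and then column-major order. The row-major walk touches neighbouring locations consecutively and tends to be measurably faster; note that in pure Python the interpreter overhead blurs the gap a C version would show dramatically, so treat the numbers as indicative only.

import time

N = 2000
grid = [[1] * N for _ in range(N)]

def time_it(f):
    start = time.perf_counter()
    f()
    return time.perf_counter() - start

# row-major: consecutive accesses land in neighbouring locations
row_major = time_it(lambda: sum(grid[i][j] for i in range(N) for j in range(N)))
# column-major: each access jumps to a different row
col_major = time_it(lambda: sum(grid[i][j] for j in range(N) for i in range(N)))

print(f"row-major: {row_major:.3f}s  column-major: {col_major:.3f}s")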

A cache is a small on-chip memory, close to the processor, which stores copies of the contents of recently used locations (and their neighbours), so that they are quickly available on subsequent accesses. With caching, the example above will execute in a little over 100ns:

a = mem[0]    # 100ns delay, copies mem[0:15] into cache
b = mem[1]    # mem[1] is in the cache

From the point of view of Spectre and Meltdown, the important point is that if you can time how long a memory access takes, you can determine whether the address you accessed was in the cache (short time) or not (long time).
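
To make that timing test concrete, here is a small sketch in the same Python syntax, with an assumed 50ns threshold separating a cache hit from a trip to main memory. (Real exploits use high-resolution cycle counters; Python’s interpreter overhead would swamp a genuine cache hit, so this is pseudocode in the article’s notation rather than working exploit code.)

import time

CACHE_THRESHOLD = 50e-9   # assumed cut-off between cache hit and DRAM access

def timed_read(mem, addr):
    # return the value at mem[addr] and how long the access took
    start = time.perf_counter()
    value = mem[addr]
    elapsed = time.perf_counter() - start
    return value, elapsed

def was_cached(mem, addr):
    # a short access suggests a cache hit; a long one, main memory
    _, elapsed = timed_read(mem, addr)
    return elapsed < CACHE_THRESHOLD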

What is a side channel?

From Wikipedia:

“… a side-channel attack is any attack based on information gained from the physical implementation of a cryptosystem, rather than brute force or theoretical weaknesses in the algorithms (compare cryptanalysis). For example, timing information, power consumption, electromagnetic leaks or even sound can provide an extra source of information, which can be exploited to break the system.”

Spectre and Meltdown are side-channel attacks which deduce the contents of a memory location which should not normally be accessible by using timing to observe whether another location is present in the cache.

Putting it all together

Now let’s look at how speculation and caching combine to permit the Meltdown attack. Consider the following example, which is a user program that sometimes reads from an illegal (kernel) address:

t = a+b
u = t+c
v = u+d
if v:
   w = kern_mem[address]   # if we get here, crash
   x = w&0x100
   y = user_mem[x]

Now our out-of-order two-way superscalar processor shuffles the program like this:

t, w_ = a+b, kern_mem[address]
u, x_ = t+c, w_&0x100
v, y_ = u+d, user_mem[x_]

if v:
   # crash
   w, x, y = w_, x_, y_      # we never get here

Even though the processor always speculatively reads from the kernel address, it must defer the resulting fault until it knows that v was non-zero. On the face of it, this feels safe because either:

  • v is zero, so the result of the illegal read isn’t committed to w
  • v is non-zero, so the program crashes before the read is committed to w

However, suppose we flush our cache before executing the code, and arrange a, b, c, and d so that v is zero. Now, the speculative load in the third cycle:

v, y_ = u+d, user_mem[x_]

will read from either address 0x000 or address 0x100 depending on the eighth bit of the result of the illegal read. Because v is zero, the results of the speculative instructions will be discarded, and execution will continue. If we time a subsequent access to one of those addresses, we can determine which address is in the cache. Congratulations: you’ve just read a single bit from the kernel’s address space!
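
Spelled out in the same pseudocode, the measurement step might look like the sketch below, where flush_cache and run_victim are hypothetical helpers (and timed_read is the probe from the caching section):

flush_cache()                  # ensure neither probe address is cached
run_victim()                   # v == 0, but the speculative load still ran

# whichever probe address comes back quickly was pulled into the cache
# by the speculative user_mem[x_] load
_, t_low  = timed_read(user_mem, 0x000)
_, t_high = timed_read(user_mem, 0x100)

secret_bit = 1 if t_high < t_low else 0   # bit 8 of the illegally read value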

The real Meltdown exploit is more complex than this, but the principle is the same. Spectre uses a similar approach to subvert software array bounds checks.

Conclusion

Modern processors go to great lengths to preserve the abstraction that they are in-order scalar machines that access memory directly, while in fact using a host of techniques including caching, instruction reordering, and speculation to deliver much higher performance than a simple processor could hope to achieve. Meltdown and Spectre are examples of what happens when we reason about security in the context of that abstraction, and then encounter minor discrepancies between the abstraction and reality.

The lack of speculation in the ARM1176, Cortex-A7, and Cortex-A53 cores used in Raspberry Pi renders us immune to attacks of this sort.

* days may not be that old, or that good

My Universal Performance Problem Advice

I get asked for recommendations a lot.  Most of the time I have little to no data when asked to perform this sort of divination.  But as it turns out I have this ready-to-go universal advice that works for me, so I'm able to give the same recommendation all the time even with no data!  Handy, huh?

Here it is:

Load as little as you can.  Run as little as you can.  Use as little memory as you can.  Take the fewest dependencies you can.  Create the simplest UI that you can.  And measure it often.  Don’t just measure your one success metric; measure all the consumption and make it as small as you can.  Consider all the important targets, including small devices and large servers.  Nothing else will do, nor is anything else necessary.

When you understand your costs you will be making solid choices.
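
Here's a minimal sketch of what “measure it often” can look like in Python, using nothing but the standard library. The workload is a hypothetical stand-in for whatever you're actually shipping; measure that instead.

import time
import tracemalloc

def measure(workload):
    # report wall-clock time and peak traced memory for a callable
    tracemalloc.start()
    start = time.perf_counter()
    workload()
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"time: {elapsed:.3f}s  peak memory: {peak / 1024:.0f} KiB")

measure(lambda: sorted(range(1_000_000)))   # hypothetical workload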

Never use non-specific data you think you remember to justify anything.

Last, and most important of all, never take the advice of some smart-ass performance expert like me when you could get a good solid measurement instead.  :)

Anomalous Propulsion Drive Verified at NASA

Here’s a very interesting development that has been brought up here on the Always Open thread, and also discussed on vortex-l. An article on Wired UK by David Hambling reports how a US scientist named Guido Fetta has built a microwave thruster which works without any propellant and has had a NASA team conduct extensive […]

Ever have a day like this one?

  • Check email and notice a message from somebody having trouble using SQLitePCL.raw on Windows Phone 8.1. Realize that I haven't run the test suite since I started working on the new build scripts. Assume that I broke something.

  • Hook up the automated test project to the output of the new build system. Sure enough, the tests fail.

  • Notice that the error message is different from the one in the user's email.

  • Realize that the user is actually using the old build system, not the new one. Wonder how that could have broken.

  • Bring up the old build system, run the tests. Yep, they fail here too. Must be something in the actual code.

  • Dig around for a while and try to find what changed.

  • Use git to go back to the last commit before I started the new build system stuff. Rebuild all. Run the tests. They pass. Good. Now I just have to diff and figure out which change caused the breakage.

  • git my working directory back to the current version of the code. Rebuild all and run the tests again to watch them fail again. BUT NOW THEY PASS.

  • Wonder if perhaps Visual Studio is less frustrating for people who drink Scotch in the mornings.

  • Decide that maybe something was flaky in my machine. The tests are passing again, so there's no problem.

  • Realize that the user wasn't actually running the test suite. He was trying to reference from his own project. And he had to do that manually, because I haven't published the nuget package yet. Maybe he just screwed up the reference or didn't copy all the necessary pieces.

  • Run the tests in the new build system to watch them pass there as well. But here they STILL FAIL.

  • Decide to take the build system out of the equation and just finish getting things working right with nuget. Build the unit test package separately in its own solution. Add a reference to the nuget package and start working out the issues.

  • Run the tests. Everything throws because the reference got added to the "bait" version of the PCL instead of to the WP81 platform assembly. Oh well. This is what I need to be fixing anyway.

  • Notice that the .targets file didn't get properly imported into the test project when the package was installed. Wonder why. But that's gotta be why the platform assembly didn't get referenced.

  • Realize that the bait assembly somehow got referenced. Wonder why.

  • What is Scotch anyway? Go read several articles about single malt whiskey.

  • Decide to take nuget out of the equation and focus on why the new build system is producing dlls that won't load.

  • Google the error message "Package failed updates, dependency or conflict validation". I need to know exactly what was the cause of the failure.

  • Realize that the default search engine in IE is Bing. Do the same search in Google. Get different results.

  • Become annoyed when co-worker interrupts me to tell me that there is a new trailer for Guardians of the Galaxy.

  • Read a web page on the Microsoft website which explains how to get the actual details of that error message. Spend time wandering around Event Viewer until I see the right stuff.

  • Realize that the web page is actually talking about WinRT on the desktop, not Windows Phone.

  • Try to find a way to get developer-grade error messages in the Windows Phone emulator. Fail.

  • Notice that below the error message, Visual Studio's suggested resolution is to instead use a unit test project that is targeted for Windows Phone, even though IT ALREADY IS.

  • Blame Steve Ballmer FOR EVERYTHING.

  • Wonder if WP81 is the only thing that broke. Run the tests for WinRT. They fail as well.

  • Get annoyed because the only way Visual Studio can run the unit tests for just one project is to unload all the others.

  • Get upset because the Visual Studio Reload Project command doesn't work like the way it did a week or two ago. Now it reloads all the projects instead of just the one I wanted. Did the installation of the Xamarin Visual Studio integration break it?

  • Go back to the very basics. Run the unit tests for plain old .NET 4.5. They pass.

  • Re-run the unit tests for WinRT to watch them fail again. NOW THEY PASS.

  • Realize the co-worker is absolutely right. The most important thing is to watch the Guardians of the Galaxy trailer.

  • Get annoyed because the sound on my MBP isn't working. Watch the whole trailer anyway, without sound.

  • Review all my project settings in the Visual Studio dialogs, just to see if I notice anything odd.

  • Go back to my web browser. Realize that the world of Scotch whiskey might actually be more complicated than Visual Studio.

  • Go home. Discover that the annual spring invasion of ants in our kitchen is proceeding nicely.

  • Fight some more with Visual Studio. Give up. Go to bed.

  • Wake up the next morning. Discover that the teenager's contribution to our war against the ants was to leave unrinsed plates by the sink. Thousands of ants feasting on cheesecake debris and syrup.

  • Open the laptop. Run diff to compare the csproj and vcxproj files from the old build system against the new one. See that there are no differences that should make any difference.

  • Change them all anyway. Update every setting to exactly match the old build system. One at a time. Run the test suite after each tweak so I can figure out exactly which of the seemingly harmless changes caused the breakage.

  • Wait. My kid had cheesecake and waffles FOR DINNER?

  • Become seriously annoyed that Visual Studio changes the Output pane from "Tests" to "Build" EVERY SINGLE TIME I run the tests.

  • Finish getting all the settings to match. The tests still don't pass.

  • Try to remember if I've ever done anything successfully. Anything at all. Distinctly recall that when I was mowing the lawn this weekend, the grass got shorter. Focus on that accomplishment. Build on that success.

  • Realize that the old build system works and the new one doesn't. There has to be a difference that I'm missing. I just have to find it.

  • Go back to the old build system. Rebuild all. Run the tests so I can watch them pass and start over from there. BUT NOW THEY'RE FAILING AGAIN.

  • Go do something else.

Introducing dotPeek 1.2 Early Access Program

It has been a while since dotPeek, our free .NET decompiler, last received an update, but that doesn’t mean we’ve put it aside. Today we’re ready to launch the dotPeek 1.2 Early Access Program, which introduces a substantial set of new features.

Starting from version 1.2, dotPeek learns to perform as a symbol server and supply the Visual Studio debugger with the information required to debug assembly code. This can be most useful when debugging a project that references an assembly from an external class library.

dotPeek listens for requests from the Visual Studio debugger, generates PDB files and source files for the requested assemblies on demand, and returns them to the debugger. dotPeek provides several options to choose exactly which assemblies you want it to generate symbol files for.

Symbol server options in dotPeek 1.2 EAP

To learn more on how to set up dotPeek as a symbol server and use it for debugging in Visual Studio, please refer to this guide.

If the Visual Studio cache already contains PDB files for certain assemblies but you would like to replace them with PDB files generated by dotPeek, use the option to generate PDB files manually. To do that, simply select an assembly in dotPeek’s Assembly Explorer, right-click it, and choose Generate PDB.

Generate pdb in dotPeek 1.2

dotPeek can export to project and generate PDB files in the background, meaning that you can explore assemblies during PDB generation or assembly export. To address cases when it’s not clear whether PDB files were generated properly, dotPeek has a dedicated tool window that shows current status and results of PDB generation.

PDB generation status in dotPeek 1.2 EAP

In addition to the set of features that streamline debugging decompiled code, dotPeek 1.2 adds quick search and node filtering in various trees, most notably Assembly Explorer. Searching and filtering using lowerCamelHumps is supported for these scenarios.

Search in Assembly Explorer in dotPeek 1.2 EAP
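
If you're curious how lowerCamelHumps matching works, here's a rough Python sketch of the idea. It's an illustrative approximation, not JetBrains' actual implementation (which also matches continuations within a hump):

import re

def camel_humps_match(query, identifier):
    # split the identifier into "humps": "AssemblyExplorer" -> ["Assembly", "Explorer"]
    humps = re.findall(r"[A-Z][a-z0-9]*", identifier) or [identifier]
    initials = "".join(h[0] for h in humps).lower()
    # the query matches if it is a subsequence of the hump initials
    it = iter(initials)
    return all(ch in it for ch in query.lower())

print(camel_humps_match("ae", "AssemblyExplorer"))   # True
print(camel_humps_match("pdb", "AssemblyExplorer"))  # False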

If you’re interested in learning about other fixes and improvements made for dotPeek 1.2 EAP, this link should help you out.

Does the above sound enticing? Download dotPeek 1.2 EAP and give it a try!

The .NET Foundation and .NET Platform Innovation

.NET has been a bedrock of the Microsoft developer ecosystem ever since its initial release more than 12 years ago.  The over 6 million professional developers using .NET have built some of the most important software and solutions powering businesses, apps and sites today, and the 1.8 billion installs of .NET across devices have created a key foundation for productive application development.

Today, I am thrilled to share with you some important updates on the .NET platform, including a wide array of important innovations around .NET as well as the creation of the .NET Foundation to foster further innovation across the .NET ecosystem.

The .NET Foundation

Earlier today we announced the formation of the .NET Foundation, an independent organization created to foster open development and collaboration around the growing collection of open source technologies for .NET.  The .NET Foundation will serve as a forum for commercial and community developers alike with a set of practices and processes that strengthen the future .NET ecosystem.

As I highlighted recently, we have seen a significant increase in the amount of open source software that makes up the foundation of the .NET development ecosystem, both from Microsoft and from other developers in the .NET community.  The .NET Foundation builds upon this trend, and further helps the open ecosystem for .NET to flourish.

The .NET Foundation will start with 24 .NET open source projects under its stewardship, including the .NET Compiler Platform (“Roslyn”) and the ASP.NET family of open source projects, as well as the MimeKit and MailKit libraries from Xamarin.

Our shared goals for the .NET Foundation are:

  • Open the development process for .NET: The .NET Foundation brings under a common umbrella existing and new relevant open source projects for the .NET platform, such as ASP.NET, Entity Framework and the recently released .NET Compiler Platform (“Roslyn”). The .NET Foundation will help establish this as the norm moving forward, so more and more .NET components and libraries can benefit from an open process that is transparent and welcomes participation.
  • Encourage customers, partners and the broader community to participate: The .NET Foundation will foster the involvement and direct code contributions from the community, both through its board members as well as directly from individual developers, through an open and transparent governance model that strengthens the future of .NET.
  • Promote innovation by a vibrant partner ecosystem and open source community: The .NET Foundation will encourage commercial partners and open source developers to build solutions that leverage the platform openness to provide additional innovation to .NET developers. This includes extending .NET to other platforms, extending Visual Studio to create new experiences, providing additional tools and extending the framework and libraries with new capabilities.

The .NET Foundation represents a key commitment to the open .NET ecosystem, and I look forward to what we will be able to deliver together via the Foundation.

.NET Innovation

Over the last 12 years, the .NET platform has delivered dozens of major innovations across the runtime, language, libraries and tools.  From projects like Language Integrated Query to the rich ASP.NET framework and more recently the Async/Await features for asynchronous programming, .NET has been a leading platform for productive application development in the industry.

Today, we had a chance to unveil the next batch of exciting innovations in the .NET platform, running the gamut from core runtime features to enabling new developer productivity tooling.  Together, this wave of .NET innovation represents an important next step for the .NET platform.

The .NET Compiler Platform - "Roslyn" (preview)

The .NET Compiler Platform project, known as "Roslyn", includes the next versions of the C# and VB compilers, as well as a compiler-as-a-service API that powers rich IDE integration, and opens up the compiler to all sorts of developer integrations. 

Today, the .NET Compiler Platform was released as open source by Microsoft Open Technologies, with the development team now working on CodePlex.  The open source compiler platform will enable a broader community of developers to contribute to the evolution of the project and to integrate the .NET compilers into a wide variety of projects.

The .NET Compiler Platform preview release also includes several new IDE features, highlighting what’s possible on top of the new platform.

It's not just Visual Studio that benefits from the open source .NET compiler platform.  Today at //build/, Miguel de Icaza of Xamarin showed how the .NET Compiler Platform can be used to provide a rich C# IntelliSense experience in Xamarin Studio running on a MacBook.

Open sourcing the .NET Compiler Platform and the C# and VB compilers opens up countless new opportunities for tools and services to be built around .NET.

C# and VB language Innovation (preview)

The preview versions of the C# and VB compilers included with today's .NET Compiler Platform release include an early look at some of the new features being considered for the next major version of the C# and VB languages.  Features like primary constructors, auto-property initializers, and using statements for static classes help developers express common code patterns in an even more streamlined way.

// Using static class
using System.Console;
 
// Primary constructor
class Point(double x, double y)
{
    // Auto-property initializers and getter-only auto-properties
    public double X { get; } = x;
    public double Y { get; } = y;
    public void PrintMe()
    {
        WriteLine("{0}, {1}", X, Y);
    }
} 

.NET Native (preview)

The developer productivity of C# and .NET has been a core value proposition for the .NET platform.  With .NET Native, we are marrying that productivity with the ability to generate binaries with performance on par with native code. 

.NET Native is an ahead-of-time compiler for .NET which leverages our C++ compiler's optimizer to offer improvements to startup time, memory usage and overall application performance.  Today's preview release lets developers try out this new compilation technology for Windows Store applications targeting x64 and ARM.

Read more about .NET Native on the .NET blog.

Next-gen JIT and SIMD (preview)

At the core of .NET, we have been working on a next-generation JIT for .NET and the CLR.  Today, we released the third preview of this new .NET JIT compiler (codename "RyuJIT"), which offers significant benefits to application startup and performance, transparently to application developers.

Today’s preview is also the first to enable new developer scenarios, such as new .NET APIs that leverage the SIMD (Single Instruction, Multiple Data) support in modern processors via the SSE2 and AVX instruction sets.  In the example below, floating-point arithmetic in a tight loop is vectorized, significantly increasing overall throughput.

Vector<float> reals = vx;
Vector<float> imags = vy;

do
{
   // This work will be vectorized using hardware SIMD instructions.
   // A temporary keeps the imaginary part computed from the original
   // real part rather than the freshly updated one.
   Vector<float> newReals = reals * reals - imags * imags + vx;
   imags = reals * imags + reals * imags + vy;
   reals = newReals;

   // … do more work …

} while (!done);

The SIMD APIs are available on NuGet.

Xamarin Partnership

Last November, we announced a partnership with Xamarin to enable C# and Visual Studio developers to target additional mobile devices including iOS and Android. 

By using .NET Portable Class Libraries, developers can easily share libraries as well as application logic across their device applications as well as with their backend implementations. 

Visual Studio and .NET offer outstanding developer productivity for application developers targeting the Windows family of devices.  With Xamarin, developers can take this productivity to iOS and Android as well.

.NET Mobile Services

Azure Mobile Services provides an easy-to-use mobile backend as a service connected to Microsoft Azure.  Last month, Scott Guthrie announced the preview availability of .NET support for Mobile Services, and today, we've taken this another step forward.

With .NET mobile services, you get the simple API connection to Azure-hosted data storage, combined with the flexible ASP.NET Web API for customizing table behavior.

Project "Orleans" (preview)

Based on work started in Microsoft Research, "Orleans" is an actor-based cloud programming model, offering a straightforward approach to building distributed high-scale computing applications, without the need to learn and apply complex concurrency or other scaling patterns. Orleans was designed for use in the cloud, and has been used extensively in Microsoft Azure. 

Read more about Orleans on the .NET Blog.

Conclusion

.NET is one of the world's leading developer platforms, and a critical technology for Microsoft and millions of developers worldwide.  The .NET Foundation will foster open development and collaboration around the growing collection of open source technologies for .NET, supported by Microsoft and other organizations investing in the .NET platform.  The next wave of .NET innovation being previewed today represents a huge step in the evolution of the .NET platform.

And this is just the beginning: the pipeline of innovation in .NET is in full gear leading up to the next version of .NET, with features to help you create the next generation of applications across Windows Server and Azure, as well as the desktop and devices.

Namaste!
