Give it Five Minutes

Give it five minutes – Jason Fried

A few years ago I used to be a hothead. Whenever anyone said anything, I’d think of a way to disagree. I’d push back hard if something didn’t fit my world-view.

It’s like I had to be first with an opinion – as if being first meant something. But what it really meant was that I wasn’t thinking hard enough about the problem. The faster you react, the less you think. Not always, but often.

It’s easy to talk about knee-jerk reactions as if they are things that only other people have. You have them too. If your neighbor isn’t immune, neither are you.

This came to a head back in 2007. I was speaking at the Business Innovation Factory conference in Providence, RI. So was Richard Saul Wurman. After my talk Richard came up to introduce himself and compliment my talk. That was very generous of him. He certainly didn’t have to do that.

And what did I do? I pushed back at him about the talk he gave. While he was making his points on stage, I was taking an inventory of the things I didn’t agree with. And when presented with an opportunity to speak with him, I quickly pushed back at some of his ideas. I must have seemed like such an asshole.

His response changed my life. It was a simple thing. He said “Man, give it five minutes.” I asked him what he meant by that. He said, it’s fine to disagree, it’s fine to push back, it’s great to have strong opinions and beliefs, but give my ideas some time to sink in before you’re sure you want to argue against them. “Five minutes” represented “think”, not react. He was totally right. I came into the discussion looking to prove something, not learn something.

This was a big moment for me.

Richard has spent his career thinking about these problems. He’s given it 30 years. And I gave it just a few minutes. Now, certainly he can be wrong and I could be right, but it’s better to think deeply about something first before being so certain you’re right.

There’s also a difference between asking questions and pushing back. Pushing back means you already think you know. Asking questions means you want to know. Ask more questions.

Learning to think first rather than react quickly is a lifelong pursuit. It’s tough. I still get hot sometimes when I shouldn’t. But I’m really enjoying all the benefits of getting better.

If you aren’t sure why this is important, think about this quote from Jonathan Ive regarding Steve Jobs’ reverence for ideas:

And just as Steve loved ideas, and loved making stuff, he treated the process of creativity with a rare and a wonderful reverence. You see, I think he better than anyone understood that while ideas ultimately can be so powerful, they begin as fragile, barely formed thoughts, so easily missed, so easily compromised, so easily just squished.

That’s deep. Ideas are fragile. They often start powerless. They’re barely there, so easy to ignore or skip or miss.

There are two things in this world that take no skill: 1. Spending other people’s money and 2. Dismissing an idea.

Dismissing an idea is so easy because it doesn’t involve any work. You can scoff at it. You can ignore it. You can puff some smoke at it. That’s easy. The hard thing to do is protect it, think about it, let it marinate, explore it, riff on it, and try it. The right idea could start out life as the wrong idea.

So next time you hear something, or someone, talk about an idea, pitch an idea, or suggest an idea, give it five minutes. Think about it a little bit before pushing back, before saying it’s too hard or it’s too much work. Those things may be true, but there may be another truth in there too: It may be worth it.

From: https://signalvnoise.com/posts/3124-give-it-five-minutes

History of land in RSA

Knowing oneself and admitting to your own shortcomings is in all probability the most challenging aspect of being human. To those of us with the inherent incapability of doing so, it is so much easier to distance ourselves from our shortcomings and blame it upon something or someone which operates out of our own sphere of influence.
In the most pathetic case of projection, we even take off the identity of the entity which we blame. It becomes a scenario of: “They” told me … “People” says…and of course the very popular: “White people are…”
This is nothing new amongst nations. The absolutely dreadful situation of the German population after 1918 created a perfect environment for this. The nation lost it government, the promises of peace and prosperity at Versailles came to nothing, people lost their property, their jobs, their lives…they were unemployed, very very poor and with no hope. And somebody was to blame.
Now we should dispute the fact that the Jews of Europe had a thing or two to answer for, but Adolf Hitler managed to identify a minority of people who refused, for generations, to conform to German society and demonise them as the scapegoat. He used his “Mein Kampf” and his Nürmberg Laws and he made the Jews of Europe into a monster and he convinced the German people, and many others, that this monster had to be destroyed. Thabo Mbeki did exactly the same by branding the white person in this country as “settler”.
This is no different from what Black South Africa is doing to white South Africa. Through the hate filled propaganda of : “You stole our land! You made us into slaves! You keep us poor!” they demonise the white South African into some sort of monster.
Through legislation such as Black Economic Empowerment and Affirmative Action they effectively create a barrier blocking the white minority from active participation in the economic and social development of a country which the white man have inhabited, cultivated, developed and defended for 526 years, ever since Bartolomeus Dias planted his first padrao on the coastline of the southern tip of this continent.
Through portraying the white South African as a racist, inbred idiot walking around in khaki shorts and mistreating all and everybody around him, the white man in South Africa is made into a typical scapegoat for everything that is wrong.
The White South African, by developing and expanding his cultural heritage, his religious beliefs and his entire orientation did not conform to the standards of Africa.
He refused to accept the absolutism of chieftainship as a form of government, he refused polygamy on basis of religion, he refused to pray to the ancestors because he was a Christian, he refused to leave his social orientation of the individual being a building block of society behind in favour of the African belief that society defines the individual. In short, he refused to betray himself.
And in being what he was, the White South African developed a country of industry and agriculture and infrastructure. He turned South Africa into a country where the first heart transplant could happen, where enough food was produced to export it to other countries, where gold and diamonds were mined and wealth created. And this all happened in less than 200 years (between 1780 and 1980).
In the 3000 years since the end of the Stone Age, the indigenous people of Africa could not manage to create an infrastructure, could not mine or produce export, could, in fact not succeed in building anything higher than one storey and could not write down anything as reference for future generations, because they could not manage to master the art of writing. In fact, when the first Europeans arrived on 6 April 1652 it was 1974 years after Ptolemy I built the magnificent library of Alexandria – and in Southern Africa the indigenous people still could do no more than a few rock paintings and a clay pot with patterns on it.
Today, this development, this contribution of the descendants of Europe has become a threat to the Black South African. He cannot compare. He has no contribution that can remotely compare to what the white man created and therefore he has to fall back on what primal instinct tells him to do: Destroy that which is a threat to you!
It is against this background that the white South African is demonised as a slaver and murderer who stole land. Let us put this in perspective:
In the first place: The Europeans who came with Van Riebeeck had no intention to stay at the Cape. We can clearly determine this from the repeated application for transfer to Batavia or Amsterdam made by almost every Company servant. The few men who decided to make this their homeland, did so because they came to love the land.
They wanted to develop and grow here. And in the written evidence, left us by the men who did not intend to stay and therefore had no reason to lie, it is written down over and over again that the Europeans settled on uninhabited land. They exchanged land for cattle and money and traded with the nomadic indigenous people.
The Company decided to import slaves. I emphasize import, because no indigenous person in this country was ever put into slavery! In actual fact, the slaves who were brought in from Madagascar and Batavia and Ceylon and East Africa were the ancestors of an entirely new group of people: the Coloured nation of South Africa who adopted the customs and culture of the European.
Ever wondered why they did not adopt the custom of Africa?
Because they were not exposed to it, that is why! Nobody at the Cape ever set eyes on a black person for 130 years before the first Trekboere met the Xhosa in the Valleys of the Amatola around 1770! These slaves also added to the bloodline of the European settlers, as did the French Hugenots of 1688 and the British Settlers of 1820. The White South African was a new nation, born in Africa. This nation called its language, Afrikaans, after Africa. This nation called itself after Africa – Afrikaners.
On the first of December 1834 slavery was abolished in the Cape Colony. This is two years before the start of the Great Trek. The white man in South Africa knew nothing of the existence of the Zulu, the Tswana, the Sotho, the Venda…and he was at war with the Xhosa. It is chronologically impossible that indigenous people could be held in slavery, if the so-called slave masters did not even know of their existence before the abolition of slavery.
Let us look at the “great” Shaka Zulu and the Zulu nation. Remember that the Europeans landed in South Africa in 1652. Shaka kaSenzaghakohona was born around 1787. He managed to unite, through force and murder and rampage a number of small tribes into the Zulu nation around 1819. Before that year, there WAS no Zulu people. A question of mathematics: The Zulu nation came into existence only 167 years after the arrival of Van Riebeeck. What logic can possibly argue that the Europeans took anything away from the Zulu-people?
So when did the black man establish himself in South Africa and how? The answer lies in the Mfecane: Mfecane (Zulu: [m̩fɛˈkǀaːne],[note 1] crushing), also known by the Sesotho name Difaqane (scattering, forced dispersal or forced migration[1]) or Lifaqane, was a period of widespread chaos and warfare among indigenous ethnic communities in southern Africa during the period between 1815 and about 1840.
As King Shaka created the militaristic Zulu Kingdom in the territory between the Tugela River and Pongola River, his forces caused a wave of warfare and disruption to sweep to other peoples. This was the prelude of the Mfecane, which spread from there. The movement of peoples caused many tribes to try to dominate those in new territories, leading to widespread warfare; consolidation of other groups, such as the Matabele, the Mfengu and the Makololo; and the creation of states such as the modern Lesotho.
Mfecane is used primarily to refer to the period when Mzilikazi, a king of the Matabele, dominated the Transvaal. During his reign, roughly from 1826 to 1836, he ordered widespread killings and devastation to remove all opposition. He reorganised the territory to establish the new Ndebele order. The death toll has never been satisfactorily determined, but the whole region became nearly depopulated. Normal estimates for the death toll range from 1 million to 2 million.
The black man established himself in this barren land now known as South Africa a full 174 years AFTER the white man. How dare you then call me a settler when you are nothing more? If I don’t belong here, certainly neither do you.
Land stolen from the black man? No. The land occupied by the Boer-people was land that nobody lived on, for the pure and simple reason that the original people of South Africa were massacred and wiped out in a racist genocide by the ancestors of the current black population of South Africa. The very same thing that is now repeated with the white man. The white man has a full and legal and historical claim to his part of this country, including land. And the black man who disputes that is welcome to bring evidence of the contrary. Remember, popular liberal myth, propagandistic expressions and loud shouting and burning and looting to hide your own incapability is not evidence. It is barbarism.
The popular myth of “the end of colonialism” is a lie also. Colonialism in South Africa ended on 31 May 1961 when the country became a Republic. White minority rule was not colonialism, because the white South African belongs here – you cannot colonise your own country.
The entire uproar about white oppression and white guilt and white debt is based, exactly like the concept of the rainbow nation and its Africa-democracy, on one big lie. In Afrikaans, a language of Africa, we say: However swiftly the lie might travel, truth will catch up one day.
Black South Africa might as well realise that the time of the lie is running out. Your stereotyping of the white man and apartheid as the cause of everything, cannot hold much longer.
You cannot hide rotting meat under gift wrap for eternity.
Some time in the very near future you will have to own up and explain how you could hold a small minority of oppressed people responsible for the disaster that you have made of a country which has the potential of being a place of safety, a welcome and hospitable home, to all its children whether they be black, white, coloured or Indian.
The black man holds the key to the final destruction of what is left, or the final realisation that we have no other choice but to peacefully co-exist. The black South African can no longer avoid admitting that the destruction of the white South African necessarily means the destruction of everything and everyone left on the southern tip of Africa.
By Daniël Lötter
Source
South Africa Today – South Africa News

Returning HTML from a filter

When your filter returns HTML, you need to do a couple of things before Angular will render it.

Step 1: Tell Angular it’s HTML it can trust:

//some filter code...

return $sce.trustAsHtml('html in here');

The $sce service can be injected into your filter like any other Angular service.
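To see the whole pattern in one place, here is a sketch of the filter factory written as a plain function, so it runs outside a live Angular app. The factory name, the markup, and the fake `$sce` object are illustrative stand-ins of my own, not real Angular internals — the fake only mimics the wrap/unwrap behaviour that the test below relies on.

```javascript
// Sketch: a filter factory taking $sce as a plain argument. In a real app
// Angular injects $sce for you; here we pass a minimal stand-in instead.
function myFilterFactory($sce) {
  return function (input) {
    // Wrap the markup so ng-bind-html will render it without complaint.
    return $sce.trustAsHtml('<strong>' + input + '</strong>');
  };
}

// Minimal stand-in for $sce that mimics the wrap/unwrap behaviour used in tests:
var fakeSce = {
  trustAsHtml: function (html) {
    return { $$unwrapTrustedValue: function () { return html; } };
  }
};

var filter = myFilterFactory(fakeSce);
console.log(filter('world').$$unwrapTrustedValue()); // <strong>world</strong>
```

The `$$unwrapTrustedValue()` call is the same trick the test in Step 2 uses to get the raw string back out of the trusted wrapper.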

Step 2: Test it

describe('some filter tests', function () {
  var f;

  beforeEach(module('my module'));

  beforeEach(inject(function ($filter) {
    f = $filter('myFilter');
  }));

  describe('Given some thing...', function () {
    it('should return me some html', function () {
      var filterResult = f('some input value').$$unwrapTrustedValue();
      expect(filterResult).toBe('<strong>HELLO</strong>&nbsp;<small>little people</small>');
    });
  });
});

Speed up git in Windows 7

Git in the bash terminal in Windows 7 seems to slow down to a crawl once you start fiddling with your bash prompt and whatnot. I finally got sick of it and found out that adding this sorts it out:

$ git config --global core.preloadindex true
$ git config --global core.fscache true
$ git config --global gc.auto 256

 

Blatantly plagiarized from Stack Overflow.

Steady as she goes

 

Steady movement is more important than speed, much of the time. So long as there is a regular progression of stimuli to get your mental hooks into, there is room for lateral movement. Once this begins, its rate is a matter of discretion.

Corwin, Prince of Amber

 

Index vs FTS… here’s how a DB decides

[Original: http://use-the-index-luke.com/blog/2014-07/finding-all-the-red-mms]

Finding All the Red M&Ms: A Story of Indexes and Full‑Table Scans


In this guest post, Chris Saxon explains a very important topic using an analogy with chocolates: when a database uses an index, and when it is better off not using one. Although Chris’s explanation has the Oracle database in mind, the principles apply to other databases too.

A common question that comes up when people start tuning queries is “why doesn’t this query use the index I expect?”. There are a few myths surrounding when database optimizers will use an index. A common one I’ve heard is that an index will be used when accessing 5% or less of the rows in a table. This isn’t the case however – the basic decision on whether or not to use an index comes down to its cost.

How do databases determine the cost of an index?

Before getting into the details, let’s talk about chocolate! Imagine you have 100 packets of M&M’s. You also have a document listing the colour of each M&M and the bag it’s stored in. This is ordered by colour, so you have all the blue sweets listed first, then the brown, green and so on.

You’ve been asked to find all the red M&M’s. There’s a couple of basic ways you could approach this task:

Method 1

Get your document listing the colour and location of each M&M. Go to the top of the “red” section. Lookup the location of the first red M&M, pick up the bag it states, and get the sweet. Go back to your document and repeat the process for the next red chocolate. Keep going back-and-forth between your document and the bags until you’ve reached the end of the red section.

Method 2

Pick up a number of bags at a time (e.g. 10), empty their contents out, pick out the red chocolates and return the others (back to their original bag).

Which approach is quicker?

Intuitively the second approach appears to be faster. You only select each bag once and then do some filtering of the items inside. Whereas with the first approach you have done a lot of back-and-forth between the document and the bags. This means you have to look into each bag multiple times.

We can be a bit more rigorous than this though. Let’s calculate how many operations we need to do to get all the red chocolates in each case.

When going between the document and the bags (method 1), each time you look up the location of a new sweet and fetch that bag, that’s a new operation. You have 100 bags with around 55 sweets in each. This means you’re doing roughly 920 (100 bags x 55 sweets / 6 colours) operations (plus some work to find the red section in your document). So the “cost” of using the document is around 920.

With the second approach you collect 10 bags in one step. This means you do ( 100 bags / 10 bags per operation = ) 10 operations (plus some filtering of the chocolates in them), giving a “cost” of 10.

Comparing these costs (920 vs. 10), method 2 is the clear winner.
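The back-of-the-envelope arithmetic above can be sketched in a few lines. The numbers (100 bags, 55 sweets per bag, 6 colours, 10 bags per pickup) come from the article; the function names are my own:

```javascript
// Method 1: one bag fetch per matching sweet (total sweets / number of colours).
function indexCost(bags, sweetsPerBag, colours) {
  return Math.round((bags * sweetsPerBag) / colours);
}

// Method 2: one operation per multi-bag pickup, regardless of how many match.
function fullScanCost(bags, bagsPerPickup) {
  return Math.ceil(bags / bagsPerPickup);
}

console.log(indexCost(100, 55, 6));  // 917 — the article rounds to ~920
console.log(fullScanCost(100, 10));  // 10
```

Note that `fullScanCost` depends only on the number of bags and the pickup size; the number of red sweets never appears, which is exactly why its cost is fixed.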

Let’s imagine another scenario. Mars have started doing a promotion where around 1 in 100 bags contain a silver M&M. If you get the silver sweet, you win a prize. You want to find the silver chocolate!

In this case, using method 1, you go to the document to find the location of the single sweet. Then you go to that bag and retrieve the sweet. One operation (well, two, including going to the document to find the location of the silver chocolate), so we have a cost of two.

With method 2, you still need to pick up every single bag (and do some filtering) just to find one sweet – the cost is fixed at 10. Clearly method 1 is far superior in this case.

What have M&M’s got to do with databases?

When Oracle stores a record to the database, it is placed in a block. Just like there are many M&Ms in a bag, (normally) there are many rows in a block. When accessing a particular row, Oracle fetches the whole block and retrieves the requested row from within it. This is analogous to us picking up a bag of M&Ms and then picking a single chocolate out.

When doing an index-range scan, Oracle will search the index (the document) to find the first value matching your where clause. It then goes back-and-forth between the index and the table blocks, fetching the records from the location pointed to by the index. This is similar to method 1, where you continually switch between the document and the M&M bags.

As the number of rows accessed by an index increases, the database has to do more work. Therefore the cost of using an index increases in line with the number of records it is expected to fetch.

When doing a full table scan (FTS), Oracle will fetch several blocks at once in a multi-block read. The data fetched is then filtered so that only rows matching your where clause are returned (in this case the red M&M’s) – the rest are discarded. Just like in method 2.

The expected number of rows returned has little impact on the work a FTS does. Its basic cost is fixed by the size of the table and how many blocks you fetch at once.

When fetching a “high” percentage of the rows from a table, it becomes far more efficient to get several blocks at once and do some filtering than it is to visit a single block multiple times.

When does an index scan become more efficient than a FTS?

In our M&M example above, the “full-table scan” method fetches all 100 bags in 10 operations, whereas the “index” approach requires a separate operation for each sweet. So an index is more efficient when it points to 10 M&M’s or fewer.

Mars puts around 55 M&M’s in each bag, so as a percentage of the “table” that’s just under ( 10 M&M’s / (100 bags * 55 sweets) * 100 = ) 0.2%!

What if Mars releases some “giant” M&M’s with only 10 sweets in a bag? In this case there are fewer sweets in total, so the denominator in the equation above decreases. Our FTS approach is still fixed at a “cost” of 10 for the 100 bags. This means the point at which an index is better is when accessing approximately ( 10/1000*100 = ) 1% of the “rows”. A higher percentage, but still small in real terms.

If they released “mini” M&Ms with 200 in a bag, the denominator would increase. This means that the index is more efficient when accessing a very small percentage of the table!
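The break-even point in all three scenarios falls out of one small calculation: the index wins while it fetches fewer sweets than the fixed full-scan cost. A sketch, using the article’s numbers (the function name is my own):

```javascript
// Percentage of "rows" at which the index and the full scan cost the same.
function breakEvenPercent(bags, sweetsPerBag, bagsPerPickup) {
  var ftsCost = Math.ceil(bags / bagsPerPickup); // fixed scan cost
  var totalSweets = bags * sweetsPerBag;         // total "rows"
  return (ftsCost / totalSweets) * 100;
}

console.log(breakEvenPercent(100, 55, 10));  // ~0.18% — regular M&M's
console.log(breakEvenPercent(100, 10, 10));  // 1%    — giant M&M's
console.log(breakEvenPercent(100, 200, 10)); // 0.05% — mini M&M's
```

Bigger rows (fewer sweets per bag) push the break-even percentage up; smaller rows push it down, just as the text argues.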

So as you increase the space required to store a row, an index becomes more effective than a FTS. The number of rows accessed by the index remains fixed. The number of blocks required to store the data increases however, making the FTS more expensive and leading to it having a higher cost.

There’s a big assumption made in the above reasoning however. It’s that there’s no correlation between the order M&M’s are listed in the document and which bag they are in. So, for example, the first red M&M (in the document) may be in bag 1, the second in bag 56, the third in bag 20, etc.

Let’s make a different assumption – that the order of red chocolates in the document corresponds to the order they appear in the bags. So the first 9 red sweets are in bag 1, the next 9 in bag 2 etc. While you still have to visit all 100 bags, you can keep the last bag accessed in your hand, only switching bags every 9 or so sweets. This reduces the number of operations you do, making the index approach more efficient.

We can take this further still. What if Mars changes the bagging process so that only one colour appears in each bag?

Now, instead of having to visit every single bag to get all the red sweets, you only have to visit around ( 100 bags / 6 colours = ) 16 bags. If the sweets are also placed in the bags in the same order they are listed in the document (so M&M’s 1-55 are all blue and in bag 1, bag 2 has blue M&M’s 56-110, and so on up to bag 100, which holds yellow M&M’s 5446-5500) you get the benefits of not switching bags compounded with the effect of having fewer bags to fetch.

This principle – how closely the order of records in a table matches the order they’re listed in a corresponding index – is referred to as the clustering factor. This figure is lower when the rows appear in the same physical order in the table as they do in the index (all sweets in a bag are the same colour) and higher when there’s little or no correlation.

The closer the clustering factor is to the number of blocks in a table the more likely it is that the index will be used (it is assigned a lower cost). The closer it is to the number of rows in a table, the more likely it is a FTS will be chosen (the index access is given a higher cost).

Bringing it all together

To sum up, we can see the cost-based optimizer decides whether to use an index or FTS by:

  • Taking the number of blocks used to store the table and dividing this by the number of blocks read in a multi-block read to give the FTS cost.

  • For each index on the table available to the query:

    • Finding the percentage of the rows in the table it expects a query to return (the selectivity)

    • This is then used to determine the percentage of the index expected to be accessed

    • The selectivity is also multiplied by the clustering factor to estimate the number of table blocks it expects to access to fetch these rows via an index

    • Adding these numbers together to give the expected cost (of the index)

  • The cost of the FTS is then compared to each index inspected and the access method with the lowest cost used.
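The decision rule summarised above can be sketched as a toy cost model. To be clear, this is a loose simplification of my own, not Oracle’s actual costing formula — the names, the inputs, and the arithmetic are all assumptions made for illustration:

```javascript
// Toy model: compare a fixed full-scan cost against an index cost that
// grows with selectivity (fraction of rows the query is expected to return).
function chooseAccessPath(tableBlocks, blocksPerMultiRead, index) {
  var ftsCost = Math.ceil(tableBlocks / blocksPerMultiRead);
  // Index cost: index blocks visited plus table blocks reached through the
  // index, both scaled by the selectivity. A low clustering factor keeps the
  // second term small, as described in the text.
  var indexCost = Math.ceil(index.leafBlocks * index.selectivity) +
                  Math.ceil(index.clusteringFactor * index.selectivity);
  return indexCost < ftsCost ? 'INDEX RANGE SCAN' : 'FULL TABLE SCAN';
}

// Fetching 0.1% of a reasonably clustered table: the index should win.
console.log(chooseAccessPath(10000, 8, {
  leafBlocks: 200, clusteringFactor: 10000, selectivity: 0.001
})); // INDEX RANGE SCAN
```

Push the selectivity or the clustering factor up and the same call flips to the full table scan, which is the behaviour the bullet points above describe.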

This is just an overview of how the (Oracle) cost-based optimizer works. If you want to see the formulas the optimizer uses, have a read of Wolfgang Breitling’s “Fallacies of the Cost Based Optimizer” paper or Jonathan Lewis’ Cost-Based Oracle Fundamentals book. The blogs of Jonathan Lewis and Richard Foote, and of course Markus’ articles on this site, also contain many posts going into this subject in more detail.

 

Mocking dependencies for Angular tests

Two approaches are available

  1. Configure the $provide service in your test to supply the mocked dependency when the injector is materialising your SUT:
'use strict';

describe('Company Input Metrics controller', function () {
  var scope, controllerFactory, mockRepo = {}, q, cimsData, repoDefer, provide;

  function createController() {
    return controllerFactory('CIMCtrl', {
      $scope: scope,
    });
  }

  beforeEach(module('adminApp', function($provide) {
    // here the provide service is being scoped for later use
    provide = $provide;
  }));

  beforeEach(inject(function ($controller, $rootScope, $q) {
    scope = $rootScope.$new();
    q = $q;
    controllerFactory = $controller;
  }));
  
  beforeEach(function () {
    cimsData = [{
      nominalGDPPotential: 1,
      equityRiskBeta: 1,
      riskFree: 1,
      equityRiskPremium: 1,
      costOfCapital: 1
    }];

    repoDefer = q.defer();
    repoDefer.resolve(cimsData);
    mockRepo.getCims = sinon.stub().returns(repoDefer.promise);
    
    // here the provide service is being configured to supply the mockRepo when the 'CIMRepository' needs to be injected
    provide.value('CIMRepository', mockRepo);
  });

  describe('Given the user wants to configure company metrics', function () {

    describe('When the user views the metrics to edit them', function () {

      it('Then the metrics are displayed', function () {
        createController();
        scope.$digest();

        expect(mockRepo.getCims.called).toBeTruthy();
        expect(scope.cims).toBe(cimsData);
      });

    });
  });
});

 

  2. Manually set the dependency to the mock in your test:

'use strict';

describe('Company Input Metrics controller', function () {
  var scope, controllerFactory, mockRepo = {}, q, cimsData, repoDefer;

  function createController() {
    return controllerFactory('CIMCtrl', {
      $scope: scope,
      // here the CIMRepository is manually set to the mockRepo for the controller
      CIMRepository: mockRepo
    });
  }

  beforeEach(module('adminApp'));

  beforeEach(inject(function ($controller, $rootScope, $q) {
    scope = $rootScope.$new();
    q = $q;
    controllerFactory = $controller;
  }));
  
  beforeEach(function () {
    cimsData = [{
      nominalGDPPotential: 1,
      equityRiskBeta: 1,
      riskFree: 1,
      equityRiskPremium: 1,
      costOfCapital: 1
    }];

    repoDefer = q.defer();
    repoDefer.resolve(cimsData);
    mockRepo.getCims = sinon.stub().returns(repoDefer.promise);
  });

  describe('Given the user wants to configure company metrics', function () {

    describe('When the user views the metrics to edit them', function () {

      it('Then the metrics are displayed', function () {
        createController();
        scope.$digest();

        expect(mockRepo.getCims.called).toBeTruthy();
        expect(scope.cims).toBe(cimsData);
      });

    });
  });
});

You can go either way, I suppose. I prefer manually setting the dependencies whenever possible. It’s more terse and, at the end of the day, just plain JS, which suits me fine.

Other things worth noting are the patterns used in the test for dealing with promises. The magic is here:

    repoDefer = q.defer();

    // this will resolve .then(fn(cimsData))...
    repoDefer.resolve(cimsData);

    mockRepo.getCims = sinon.stub().returns(repoDefer.promise);
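The same pre-resolved-promise trick works outside Angular too. Here is a hedged sketch using native promises instead of `$q` and sinon (the `makeStub`/`getCims` names are illustrative, not part of any library):

```javascript
// A hand-rolled stub that returns an already-resolved promise,
// mirroring repoDefer.resolve(...) above.
function makeStub(data) {
  var resolved = Promise.resolve(data); // resolved before anyone calls .then()
  return function getCims() {
    return resolved;
  };
}

var getCims = makeStub([{ costOfCapital: 1 }]);
getCims().then(function (cims) {
  console.log(cims[0].costOfCapital); // 1
});
```

One difference worth remembering: native promise callbacks fire on the microtask queue by themselves, whereas `$q` promises only resolve during a digest cycle — which is why the tests above call `scope.$digest()` before asserting.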

Copy blobs around like a boss

Import-Module Azure

$sourceAccount = 'myvids'
$sourceKey = '@@@@@@@'

$destAccount = 'destvids'
$destKey = '@@@@@@@'

$containerName = 'videos'

$sourceContext = New-AzureStorageContext $sourceAccount $sourceKey
$destContext = New-AzureStorageContext $destAccount $destKey

$blobs = Get-AzureStorageBlob `
    -Context $sourceContext `
    -Container $containerName

$copiedBlobs = $blobs |
    Start-AzureStorageBlobCopy `
        -DestContext $destContext `
        -DestContainer $containerName `
        -Verbose 

$copiedBlobs | Get-AzureStorageBlobCopyState