
Challenges of Relaunching Community Cryptocurrency


This post is based on my experience of the challenges we encountered when we tried to relaunch a dormant cryptocurrency. If you are planning on reviving one, you are going to deal with these challenges to some extent. You may have run into most of them in one situation or another, even if you have never been involved in cryptocurrency.

  1. Security:
    • A lot of existing altcoin wallets have not received the right security patches for a very long time. Therefore, one of the main challenges will be making sure the existing code base and libraries are updated with the right security patches. In our case, the last time the code and external libraries were updated was in 2015.
  2. Team communication, structure & collaboration:
    • If team members are co-located and working solely on the coin, it is easy to build rapport and trust among them. However, this becomes challenging when you have team members scattered around the world, most of whom have families and day jobs. Since you do not meet each other while performing tasks, a stronger level of trust is needed to rely on each other to progress the work. In our case, we use applications such as Slack, Trello, GitHub & Discord for communication and collaboration. We also have a vetting process for recruiting new team members to make sure they fit in the team.
    • Another challenge you might face is around team structure and decision making. What kind of team structure will you follow: hierarchical, flat or something else? If you are working on a community-based cryptocurrency, should you engage the rest of the community in decision making, or should only a handful of members make decisions about the coin?
    • Currently we are a small team, so communication, collaboration and decision making are easy to manage. However, I am looking forward to seeing what challenges lie ahead as the core team grows.
  3. Marketing strategy:
    • You might encounter challenges around marketing strategy, which includes social media, exchanges, community engagement, the roadmap and so forth. If you can liaise with the old team and get access to the existing marketing strategy (if there is one), it will make your life a lot easier. Otherwise you might face the same challenges we did, such as:
        1. Social media channels & website: We didn't have much luck getting the right access to the existing social media channels/forums from the old team. This meant we had to start all over again by setting up new social media channels and a website. It also meant publishing posts and articles on different social channels/forums to let the existing community know about us, what we are doing with the coin, where we are heading (roadmap), where they can get more information about the coin and how they can come onboard with the new project.
        2. Exchanges: Like social media, we had to reach out to existing exchanges to let them know about the relaunch so they could update the appropriate links. At the same time, to increase our coin's exposure, we reached out to new exchanges. Some of these exchanges required us to pay a certain amount of fiat/cryptocurrency in order to be listed; for others it was online voting. In the case of voting, we had to reach out to the wider community via social media to get their help.
        3. Community engagement: While you work on getting new community members on board (i.e. via airdrops, Twitter etc.), you will also need to make sure you are engaging with existing members to get their buy-in into what the new team is trying to achieve. This means actively answering their questions, comments and concerns. It could be anything from "How to" guides, the hard fork, the roadmap, exchanges, old vs new channels and mining, to even trolls. All of this requires time and commitment from the team. There is a lot to community engagement and I am still learning.
  4. Testing:
    • Cryptocurrency brings varied challenges when it comes to testing, on both the functional and non-functional side. The following are some of the areas you will have to deal with when involved in testing. Depending on your situation, you might focus on specific types of tests for the relaunch and leave the others for later down the track.
        • Security
        • Performance/Scalability/Volume
        • Wallet functionality on different OS & devices
        • Wallet synching testing
        • Wallet backup functionality
        • Wallet upgrade/new installation etc
        • Miner testing (ASIC, GPU/CPU mining with different numbers of cores, memory settings and disk IOPS)
        • and so forth
    • I normally use the "FEW HICCUPPS" heuristic when I don't have requirements and need to do exploratory testing. Testing all these areas can be time consuming if done manually, so you want to leverage automation. We are in the process of moving towards test automation so it frees us up for exploratory testing and other things. Where possible, try to leverage cloud/virtualization options for testing. For example, we use VMware/VirtualBox/AWS for new wallet installation, upgrade and other functionality testing.

 

I hope this post gives you enough insight into the potential challenges you might be dealing with when relaunching a community cryptocurrency.

Is Endurance Testing Dead?


Recently I posted a question in the LinkedIn "Performance Specialists" group regarding the soak (endurance) test and whether it is required if a team wants to achieve daily deployment into production. I have to say it was an interesting discussion with different views, and it led me to write this post about my own thoughts on the topic. Before we get to that, let's first define what an endurance test is and its objectives.

 

What is Endurance testing?

Endurance or soak testing is a type of load test that normally runs for an extended period of time (a few hours to days) while the system under test is subjected to a production-like (or anticipated) load, to understand and validate its performance characteristics.

 

Why conduct Endurance test?

An endurance test is conducted to answer different kinds of questions, such as:

 

  • Does my application's performance remain consistent or degrade over time? For example, your application might have a resource leak, such as a memory or connection leak, which manifests slowly over time and impacts your system.
  • Is there any interference that was not accounted for and could potentially impact system performance? For example, application backups, VM backups, batch jobs or third-party jobs running at different times of day that might not have been accounted for in other tests but do impact system performance.
  • Are there any slow-manifesting problems that have not been detected by other types of test? Other than resource leaks, you could also detect problems caused by the sheer volume of data. An example of such an issue is a full table scan on the database: as the data grows, queries start to slow down because they are scanning the full table. Similarly, the application could crash after running out of disk space because too much data is being written to log files.
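To make the first kind of problem concrete, here is a minimal, hypothetical Java sketch (the class and method names are mine, not from any real system) of the sort of leak that stays invisible in a short load test but sinks a multi-hour soak:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical handler that caches every request payload and never evicts.
// Over a short test the heap impact is negligible; over a long soak the
// list keeps growing until the JVM eventually runs out of memory.
public class LeakyHandler {
    private final List<byte[]> cache = new ArrayList<>();

    public void handle(byte[] payload) {
        cache.add(payload); // grows forever -- nothing is ever removed
    }

    public int cachedEntries() {
        return cache.size();
    }

    public static void main(String[] args) {
        LeakyHandler handler = new LeakyHandler();
        for (int i = 0; i < 1_000; i++) {
            handler.handle(new byte[1024]); // ~1 KB per "request"
        }
        System.out.println("entries cached: " + handler.cachedEntries());
    }
}
```

At a few requests per second this leaks only megabytes per hour, which is exactly why an endurance test, not a short spike test, is what exposes it.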

 

Now let's get back to the question of whether endurance testing is required if you want to achieve daily deployments.

 

My personal view is that there is no simple answer to whether to conduct an endurance test, and for how long, while still achieving daily deployments. Would you want to conduct an endurance test that lasts for 12 hours if you are making a text change? Probably NOT. What about a rewrite of a function that makes calls to a database and a third-party API? Probably YES.

 

In the end it all comes down to the level of risk the team and stakeholders are willing to take to deploy code into production and stand by their decision, and how quickly they can mitigate performance issues in production without impacting brand image, sales, user experience and so forth.

 

However, I don't believe endurance testing is dead. There is a place for it in the continuous deployment world; it just needs a little more thinking and planning (maybe a different approach) if you want to achieve daily deployments. You can run a soak test overnight, then analyze and share the results with the rest of the team in the morning, or conduct it over the weekend. Another approach could be that the team reviews which changes they believe are low risk, and therefore the best candidates for deployment during the week without requiring an endurance test, while high-risk changes undergo endurance testing over the weekend before being deployed into production the following week.

 

Finally, there need to be the right monitoring and alerting tools in place to identify performance-related issues (be it in production or non-production). Any issue identified in production also needs to be fed back to the performance engineering team. This will help them improve their performance testing process.

 

 

Reflecting on WOPR26


 

Over the years, I have attended a few testing conferences, and the Workshop on Performance and Reliability (WOPR) stands out for me due to its unique format.

 

The WOPR conference is generally limited to 20-25 seats and lasts for three days. This year the 26th WOPR was held in Melbourne (Australia) and the theme of the conference was “Cognitive Biases in Performance Testing”. We had 16 participants from around the world attending it.

 

A few things about WOPR that stood out for me when compared to other conferences are:

  • Real-life experience reports
    • Based on the theme, participants present real-life experience reports to the rest of the group. If you want to learn more about experience reports, refer to this link.
    • After my first day, I ended up updating my report. It was written more like a "How To" presentation than an experience report. However, because of the unique format, I had time to hear others' experience reports, reflect and update my report before presenting.
    • Some of the biases we discussed and presented at WOPR26 were:
      • Anchoring bias
      • Expectation bias
      • Attentional bias
      • Dunning-Kruger effect
      • Automation bias
      • Confirmation bias
      • Pro-Innovation bias

 

  • Open season and Q&A
    • The experience report is used as a vehicle to stimulate conversation between participants. You get to hear their real-life experiences and also get feedback on your own. This tends to lead to more questions, comments and new threads on the topic.
    • There is no end time for Q&A and open season sessions. A session continues as long as there are questions, new threads and comments.
    • To facilitate Q&A and open season, the facilitator uses K-cards. They are green, yellow and pink, and they help the group stay focused on topics related to the theme of the conference.
      • Green card: new thread
      • Yellow card: question/comment (used in the same thread)
      • Pink card: important question (put me at the top of the stack)

 

  • WOPR dinner night
    • This is the highlight of the conference, where you get to unwind with the rest of the group after a long day. You learn something, build new relationships and, above all, get to relax with like-minded people over a wonderful dinner.

 

  • Conference atmosphere & corridor talks
    • During breaks you might have different groups discussing various things, revisiting what was covered during sessions, catching up with old friends and introducing themselves. This helps the conference atmosphere because everyone gets comfortable with one another.
    • The atmosphere also helped me overcome my public speaking jitters, as it was my first time presenting at a peer conference.

 

The attendees at WOPR26 were Paul Holland, Eric Proegler, John Gallagher, Tim Koopmans, Harinder Seera, Andy Lee, Aravind Sridharan, Diane Omuoyo, Scott Stevens, Sean Stolberg, Srivalli Aparna, Derek Mead, Joel Deutscher, Ben Rowan, Stuart Moncrieff and Stephen Townshend.

 

I would also like to thank Stuart Moncrieff for taking pictures and sharing them with us.

Bitcoin Crypto Throughput


You might have heard in the news, on websites, by word of mouth or through some other channel that the bitcoin transaction rate is somewhere between 3 to 7 transactions/sec.

 

If you are like me, you want to understand where this number comes from and how it is calculated. So here it is. Before I get to the calculation, we need to define a few terms:

 

Transaction — a transfer of bitcoin from one address to another. For example, Harinder transferring bitcoin to Jimmy in order to buy a bitcoin ebook.

 

Block — a group of transactions, marked with a timestamp and fingerprint of the previous block.

 

Blockchain — a list of validated blocks, each linked to its predecessor all the way back to the genesis block.

 

If you want an easy way to remember them, just think of a train, its carriages and passengers.

Transactions are your passengers, a carriage contains the passengers (in this case the block), and the train consists of a series of connected carriages (in this case the blockchain). As shown below.

 

[Images: train/carriage/passenger analogy; graph of bitcoin block size and average transaction size]

From the above graph we can see that the block size is approaching 1 MB and the average transaction size is hovering around 505 bytes. So…

 

How many transactions/block?

Formula: Transactions/block = Block size/average transaction size.

Average transactions/block = 1,000,000 bytes / 505 bytes ≈ 1980

 

What is the bitcoin throughput?

Formula: Transactions/sec = Transactions per block/ block time (in seconds)

Note: A new bitcoin block is on average discovered every 10 minutes. Therefore,

Transactions/sec = 1980/(10*60) = 1980/600 = 3.3
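As a sanity check, the arithmetic above can be scripted. This small Java sketch (the class and method names are my own) simply plugs in the averages quoted earlier:

```java
public class BitcoinThroughput {
    // Transactions/block = block size / average transaction size
    public static double txPerBlock(double blockSizeBytes, double avgTxBytes) {
        return blockSizeBytes / avgTxBytes;
    }

    // Transactions/sec = transactions per block / block time in seconds
    public static double txPerSecond(double txPerBlock, double blockTimeSeconds) {
        return txPerBlock / blockTimeSeconds;
    }

    public static void main(String[] args) {
        double perBlock = txPerBlock(1_000_000, 505);      // ~1980 transactions/block
        double perSecond = txPerSecond(perBlock, 10 * 60); // ~3.3 transactions/sec
        System.out.printf("Transactions/block: %.0f%n", perBlock);
        System.out.printf("Transactions/sec: %.1f%n", perSecond);
    }
}
```

Running it prints roughly 1980 transactions/block and 3.3 transactions/sec, matching the figures above.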

 

There you have it: now you know how the bitcoin throughput figure is calculated.

Probability Distribution Code in JMeter

Skewness:

“In probability theory and statistics, skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean. The skewness value can be positive or negative, or undefined.” (source: Wikipedia)

 

Exponential:

“In probability theory and statistics, the exponential distribution (also known as the negative exponential distribution) is the probability distribution that describes the time between events in a Poisson point process, i.e. a process in which events occur continuously and independently at a constant average rate.” (source: Wikipedia)

 

When designing an application simulation model for performance testing, you will come across scenarios that require you to use different probability distributions to emulate correct production behavior. For example, the average number of items per order, or the think time between pages.

 

The code below is an example of how you can create a skewed or exponential distribution in JMeter.

 

Skewed distribution code

min = VALUE; //update this value to the minimum value expected in the distribution
max = VALUE; //update this value to the maximum value expected in the distribution
bias = VALUE; //update this to the value the distribution should be biased toward
influence = 1; //[0.0, 1.0] - 1 means 100% influence
rnd = Math.random()*(max-min)+min;
mix = Math.random()*influence;
result = rnd*(1 - mix) + bias*mix;

 

NOTE: The above code is from Stack Overflow and I don't remember the link to it. If you do, please let me know so I can refer to it.

 

Exponential distribution code

Avg = VALUE; //update this value to reflect the mean value for the distribution
MIN = VALUE; //update this value to the minimum value expected in the distribution
result = (long)(-(double)Avg*Math.log(Math.random()))+MIN;

 

Example (Exponential distribution):
MIN = 1;
Avg = 2.5;
result = (long)(-(double)Avg*Math.log(Math.random()))+MIN;
If the above code is executed for 200 iterations/thread, it will generate the values depicted in the histogram below. The more iterations executed, the better the distribution will be. For testing, two threads were used.

[Histogram: exponentially distributed values]
NOTE: If you want to have a hard max boundary, add an if condition in the code to check against the MAX value.
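If you want to check the formula outside JMeter, the same expression can be run as plain Java. This is my own standalone sketch (the class name, fixed seed and iteration count are assumptions, not part of the original snippet):

```java
import java.util.Random;

public class ExponentialSampler {
    // Same formula as the Beanshell snippet: (long)(-Avg * ln(U)) + MIN
    static long sample(Random rnd, double avg, long min) {
        return (long) (-avg * Math.log(rnd.nextDouble())) + min;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42); // fixed seed so runs are reproducible
        double avg = 2.5;
        long min = 1;
        int n = 10_000;
        double sum = 0;
        for (int i = 0; i < n; i++) {
            sum += sample(rnd, avg, min);
        }
        // The cast to long truncates, so the observed mean sits somewhat
        // below Avg + MIN rather than exactly at it.
        System.out.printf("observed mean over %d samples: %.2f%n", n, sum / n);
    }
}
```

Running a large number of iterations like this is a quick way to confirm the mean before wiring the snippet into a test plan.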

 

Example (Skewed distribution):

min = 1;
max = 10;
bias = 3;
influence = 1;
rnd = Math.random()*(max-min)+min;
mix = Math.random()*influence;
result = rnd*(1 - mix) + bias*mix;

If the above code is executed for 200 iterations/thread, it will generate the values depicted in the histogram below. The more iterations executed, the better the distribution will look. For testing, two threads were used.

[Histogram: skewed distribution values]
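As with the exponential case, the skewed formula can also be exercised as plain Java. Again the class name, seed and iteration count are my own additions; the sketch checks that values stay within [min, max] and that the mean is pulled toward the bias:

```java
import java.util.Random;

public class SkewedSampler {
    // Same formula as the snippet above: a random mix between a uniform
    // value in [min, max] and the bias point.
    static double sample(Random rnd, double min, double max, double bias, double influence) {
        double uniform = rnd.nextDouble() * (max - min) + min;
        double mix = rnd.nextDouble() * influence;
        return uniform * (1 - mix) + bias * mix;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42); // fixed seed so runs are reproducible
        int n = 10_000;
        double sum = 0;
        for (int i = 0; i < n; i++) {
            sum += sample(rnd, 1, 10, 3, 1);
        }
        // With min=1, max=10, bias=3 the plain uniform mean would be 5.5;
        // mixing with the bias point pulls the observed mean down toward 3.
        System.out.printf("observed mean: %.2f%n", sum / n);
    }
}
```

With these parameters the observed mean comes out around 4.25 rather than the uniform 5.5, which is the skew toward the bias value showing up.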

 

Use a Beanshell sampler to generate the value and save it in a variable, then pass the variable into the loop controller to control it. Below are screenshots of the code in the Beanshell sampler.

 

[Screenshot: Beanshell sampler with the exponential distribution code]

[Screenshot: Beanshell sampler with the skewed distribution code]

 

NOTE:

1: Make sure you run a few tests to get the distribution right so it reflects what is happening in production.

2: If you have better code for generating a probability distribution, be it exponential or any other kind, I would love to know.

3: Use a JSR223 sampler rather than a Beanshell sampler. I have noticed that Beanshell sampler throughput is lower compared to the JSR223 sampler.