Changes to our Agile Process

At Thinknear, we have used the Agile methodology since the inception of the company. But a few months ago we made some changes to our process to help improve our team. Some were changes to the parameters of the methodology, and some were changes to follow the Agile methodology more strictly. Some of the changes also coincided with our engineering team splitting into two teams, which should give you a sense of our size (~14 engineers now). As your team grows, it is vital to adapt your processes. Things that worked with 3 engineers fall apart at 10 engineers, and I am sure things that work at 10 engineers fall apart at 25. I believe startups should seek out engineers who are adaptable and who are problem-solvers (not just of code problems). This pace of change can be overwhelming to engineers who are used to the big-company pace.

I wanted to share some of the changes we implemented at Thinknear, and discuss the impact of them.

One Week Iterations

We moved to one-week iterations from two-week iterations. I would recommend this to any group that faces a lot of change or uncertainty (e.g. an early startup). It allows your team to evaluate what it is doing much more quickly. It is a bit ironic that our engineering org is maturing, yet we decided to shorten our sprints.

But I would argue there are a lot of other benefits as well. When you have only a week of work, you work even harder to break down your stories into smaller pieces; if a story takes 3-4 days, it feels like you get nothing done. Smaller stories are great for many reasons: they make us feel more productive, they make us more thoughtful in our planning, and they make our estimates more accurate.

Another benefit is that if some unexpected work comes up that can wait a few days, we don’t have to bother re-arranging the current sprint and working it out with the PM/PO. Because the next sprint is “not that far away”, it gives stakeholders comfort when some of their work cannot be prioritized for the current sprint. This reduces the amount of negotiating that needs to be done.

Drawbacks: it can feel like you have 2x the meetings. This is something we are improving on. Be sure to change the cadence of activities that don’t need to be done weekly (e.g. backlog grooming). The goal is to make the meetings more productive and ideally shorter. If we finish a pre-planning meeting early, we will try to take a quick, high-level pass at what might be in the pipeline two sprints out. It helps cut down on future meeting time.

Team Based Sprints

We used to assign stories to each team member before the sprint started. We now prioritize all the work and let team members pick it off in priority order. If two stories have a strong dependency, the same person will usually take both to improve productivity.

This approach gives engineers some choice in what they work on each sprint. Many claim the benefit of a team-based approach is that team members are more willing to help each other. This was never a problem in our case, so I can’t speak to that. But I do believe it is important to incentivize the right behaviour in a team. Thus, if an engineer has a choice between doing their “own” work or helping a co-worker with theirs, they should be free to help the co-worker in hopes of increasing the velocity of the team as a whole.

Having engineers select work promotes learning of the systems, since we don’t preselect “the expert” for the story/component/system. I think the biggest positive is the pressure to not let your teammates down since everyone is expected to pull their own share.

The only real drawback I can think of is that you might sacrifice some velocity by not having the most knowledgeable person (at the time) work on a ticket, but the knowledge sharing you gain outweighs this, IMHO.

Story points vs Time estimates

We decided to move to story points for estimating stories. We use the Fibonacci sequence with a maximum story size of 5 points; anything larger gets broken up into multiple stories. This change has allowed us to estimate more quickly. Measuring relative size is easier for us than thinking through all the components that need to be built for a story and how long each will take. Instead we think about questions like:

  • what are the unknowns?
  • have we done this before?
  • how many pieces is this touching?
  • will we need to do any refactoring?

We ask these questions because they usually uncover the hidden complexities.

Everyone will try to relate story points back to actual time worked. It is an urge you need to fight, because you will lose the benefits of using story points. Also, at the start of our process we didn’t know our velocity, so we didn’t know what we could accomplish in a sprint. This takes a few weeks to get used to. This approach won’t work well if a team’s membership is highly variable from sprint to sprint, since you need to establish a stable baseline for work.
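Establishing that baseline is nothing fancier than an average of recent sprints. A toy sketch in Java (the sprint numbers are made up):

```java
// Toy example: a velocity baseline is just the average story points
// completed over the last few sprints. The numbers here are made up.
public class Velocity {
    static double average(int[] completedPoints) {
        int sum = 0;
        for (int points : completedPoints) {
            sum += points;
        }
        return (double) sum / completedPoints.length;
    }

    public static void main(String[] args) {
        int[] lastThreeSprints = {18, 21, 17};
        // Plan roughly this many points for the next sprint.
        System.out.println("Velocity baseline: " + average(lastThreeSprints));
    }
}
```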

Sprint Retrospectives

I am a big believer in retrospectives as long as the focus is on improving (future) instead of blaming (past). If you are a team that is moving fast and constantly dealing with change, retrospectives are a perfect chance to take a breather and reflect.

We started doing sprint retrospectives at the beginning of each sprint planning meeting. This process has helped improve our team communication in various ways. These meetings give team members an opportunity to bring up anything they liked or disliked during the previous sprint. We record all ideas and form action items from them. The team should agree on the process for dealing with the action items as well.

Some common things we bring up:

  • Did someone do an awesome job at something?
  • Was a particular task unnecessarily painful to do? How can we make it easier? Is it worth it to do so?
  • Did we mess something up? How can we prevent it from happening again?

The list goes on…

Sort of ironically (because people view retros as a finger-pointing exercise), I have seen the biggest increase in praise for other team members’ work in these meetings. I don’t think there is enough positive feedback in engineering organizations in general, so this has been a welcome surprise.

It has been all positives on our side. You need to encourage everyone to participate in bringing their ideas to the table. The goal is to have productive discussions. Sometimes people vent through this channel, which isn’t necessarily a bad thing, but you want to keep the goal in mind of improving the team.

An engineering team is like a garden that needs constant maintenance. Retrospectives give us a process around dealing with the changing environment.

Research vs Design vs Implementation

For more complicated stories we have started to create research stories before we start implementation. This approach has allowed us to be more thoughtful in building components. In the old system, research, design and implementation would often live in the same story. That let us move quicker when we were a smaller team, and it worked well when our system was simpler. It started to break down as our system grew more complex: design became a larger chunk of the work, and engineers were prone to rushing through it. It also made these stories harder to scope, because of the hidden complexities that would surface once an engineer dove into the task.

Separate research and design stories set aside some time for the engineer to properly plan the work. In a startup this might slow you down, but in more complex systems it leads to higher quality work. The engineer doesn’t need to rush through the work. Allocating the time also allows us to document the decisions made and knowledge gained, whereas in the old system it might have been skipped. This is extremely helpful as a future reference. You should aim to still have deliverables for these stories, such as presenting to the team or documentation.

This approach might not work in a very small startup, where you need to move quicker. Regardless, I always recommend having at least 2 engineers look at a design before implementation starts. The 2nd pair of eyes has saved us countless hours that would have gone into implementing subpar solutions.

Those are the bigger changes we have made to our process over the last few months. So far they have been all positive changes. Our process will continue to evolve as we mature. The key is to have everyone on the team thinking about how things could be done better.

A Developer’s Sixth Sense

Most engineers are fact-driven by nature, which is a good thing. I am here to argue that listening to your intuition also serves a purpose and can help you become a better developer…or anything else for that matter. We don’t always have all the facts, and the best people know how to use their instincts to their advantage. This is something I have been working on over the last year, and I still have much room to grow.

Below are 4 areas of software engineering where intuition can help.

1. Task Scoping

You are given a piece of work and told how long it will take. Do you get a “shit…I can’t waste a single second” feeling? Or just a “this is going to be tight” one? As developers, we can’t be expected to remember how everything works. There will always be times when we forget about a particular use case, which inevitably blows up an estimate. But our gut (or unconscious mind) does surprisingly well at alerting us.

Try to recognize that feeling and ask yourself why you feel that way. Was it because the last time you refactored that class it ended up a tangled mess? Or because the last time you built a component that interacted with components A and B it took much longer than expected? If a task makes you anxious or pressured, it is probably under-scoped.

When estimating, listening to your gut can help you surface problems sooner rather than later. Which is what all engineers should strive for – and project managers love! The key is to recognize those feelings.

2. Code Reviews

Code reviews are where I use my intuition the most. At Thinknear, we code review everything and using this sixth sense helps speed up the process for me. Once you start doing code reviews regularly you start to spot “code smells” faster. But many times I will look at a piece of code and say “that looks wrong” or “something here doesn’t look right”. I might not know how to fix it or what is exactly wrong, but my gut has served its purpose – identifying a potential issue.

Some examples of things that I tend to pick up on are:

  • Am I having a hard time reading the code? (better method/variable names?)
  • Why is this method so long? (too many things happening in the method)
  • Why are there so many unrelated methods? (too many responsibilities in the class)

Those are just a few to give you an idea. Once you’ve found a problematic area, you can move on to suggesting fixes.

3. Architecting Solutions

At Thinknear, we collaborate on all major architecture designs. Usually a single engineer is responsible for figuring out the design, but that person is responsible for running the solution by at least one other engineer. I can’t tell you the number of times I came up with a solution and then had a better alternative or alteration suggested by one of my peers.

Intuition also helps in these discussions. This is the place where we all probably use it the most. Someone explains a design that makes us go “eeek!” inside. We explain why it might not be the best idea. But what about those situations where you can’t quite put your finger on it?

Still bring it up.

Many times, just bringing up the fact that something feels off will jog someone’s memory: “Oh yeah, I remember when we created component X we did that and we had to re-write it.” Contrast that queasy feeling with the “wow, that design is slick” feeling. That contrast can help you recognize when you should be pushing back. Or what about that “there has got to be a better way” feeling? Listen to yourself. Bring it up. Give yourself credit for the knowledge you have amassed.

4. Interviews

Your unconscious mind processes a ton of visual cues, and it only brings to your attention what it thinks you need. It’s really amazing. We’ve all heard about the selective-attention experiments where a person in a gorilla suit walks across the screen while the onlookers are distracted by something else on it.

Culture fit is one of the most important factors in hiring a candidate, yet it’s hard to gather empirical evidence on how someone might fit in. With such a limited timeframe to evaluate someone on something so important, it can be difficult. At Thinknear, we always have lunch with the candidate so they can meet the team in a more informal setting. It also gives the candidate time to assess who we are. Intuition should be used as another data point in these scenarios.

This is the area I have tried to improve on the most. I have interviewed well over 150 people, and I can say from experience that your intuition at evaluating people only gets better with practice. In my experience, a gut feeling is grounds for turning down a candidate, but not for accepting one. It is a useful tool, one of many you should be using when hiring your next co-worker.

Those are the four areas where I have seen intuition improve my work. Sometimes you don’t have all the facts. Use your gut to your advantage. It is a great ally.

One-off Webpages That Make Life Easier

Below is a list of one-off webpages that are extremely useful in my day-to-day life as a developer. I’d love to hear what other pages I am missing out on or if you have better versions of any of these tools.

1. JSON Beautifier: http://jsonviewer.stack.hu

Simple and clean UI. Handles broken/incomplete JSON very well.

2. Convert Java Date to Millis:  http://www.fileformat.info/tip/java/date2millis.htm

Converts both ways. Nothing special. When I don’t want to load Eclipse.

3. Amazon EC2 Instance Comparison: http://www.ec2instances.info

Not sure how Amazon doesn’t have something like this. Chart form. Easy to compare. Keeps up to date as best as I can tell.

4. Ruby Regex Tester: http://www.rubular.com

Clean UI. Legend at bottom. Able to easily test many inputs.

5. Git Cheatsheet: http://cheat.errtheblog.com/s/git

Commonly used commands with descriptions.

6. AWS Service Health Dashboard: http://status.aws.amazon.com

Quickly see if a service you depend on is having trouble.

7. AWS Service Pricing: http://www.cloudomix.com/pricing

Cost of all services and comparison. A lot of information.

8. Apache Hive Functions:  https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF

List of all built-in functions with descriptions.

9. TCP Variables: https://www.frozentux.net/ipsysctl-tutorial/chunkyhtml/tcpvariables.html

Simple list with descriptions all in one place.

10. Find Ruby Gems: https://www.ruby-toolbox.com

Quick way to find and compare gems with same purposes.

11. IP Address Lookup: http://www.maxmind.com/en/geoip_demo

Most reliable and matches most accurately.

12. Colour Palette Search: http://www.colourlovers.com/palettes/search

User generated palettes that you can search by color.

13. Online Diagram Creator: https://www.draw.io

Great for architecture diagrams.

14. Nerd jokes: http://devopsreactions.tumblr.com/archive

When you are having a bad day.

That’s all I have. These help me tremendously through my work day and help make my life more productive. What are yours?

Discuss on Hacker News.

What else is running on my EC2 instance?…


At Thinknear we use a lot of Amazon Web Services. We use Elastic Compute Cloud (EC2) to host our Apache Tomcat server running Java 6. We recently had some major performance issues with our service, which led us to analyze our EC2 hosts and figure out exactly what was running on them.

At peak hours our servers handle ~35K requests/second, and we have a 100 ms SLA to maintain with our partners. In this type of environment performance is the top priority, and we were surprised by some of the things we found running on our EC2 instances. I thought I would share what we found. Some of it was surprising. Some of it was documented, once you knew what to look for – but I find the Amazon docs hard to navigate. Throughout our interactions with Amazon support, we found that not all representatives were aware of the points below.

For an AMI, we run Amazon Linux x86 64-bit (ami-e8249881) with the Tomcat 7 Linux configuration.

1. Apache runs as a proxy.

Our load balancer (AWS Elastic Load Balancing) directs port 80 traffic to port 80 on the hosts. Apache then runs as a proxy that forwards requests from port 80 to port 8080 (the default Tomcat port).

Config files are located in /etc/httpd/conf.d and /etc/httpd/conf. We had to tweak settings in /etc/httpd/conf/httpd.conf based on our use case. These settings were the root cause of our issues. We had never looked into them because everything seemed to work.
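As a rough sketch of what that proxy layer looks like (the directive values below are illustrative, not our production settings), the relevant pieces of an httpd.conf would be something like:

```apache
# Illustrative only: Apache listens on 80 and proxies to Tomcat on 8080.
<VirtualHost *:80>
  ProxyPass / http://localhost:8080/ retry=0
  ProxyPassReverse / http://localhost:8080/
  ProxyPreserveHost on
</VirtualHost>

# Worker limits like these (prefork MPM) are the kind of settings we had
# to tune for our traffic; the numbers here are placeholders.
<IfModule prefork.c>
  StartServers        10
  MaxClients         256
  MaxRequestsPerChild  0
</IfModule>
```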

We tried bypassing Apache because we didn’t need the features it brings. Unfortunately, we had issues with our servers on deployment when we bypassed Apache. We haven’t found the root cause of this yet.

2. Logrotate Elastic Beanstalk Log Files

elasticbeanstalk.conf in /etc/httpd/conf.d/ defines the ErrorLog and AccessLog properties for Apache. These files are then rotated out by /etc/cron.hourly/logrotate-elasticbeanstalk-httpd. The problem was that we didn’t know these log files existed and we felt the settings were too aggressive for us.

These are our current settings: https://gist.github.com/KamilMroczek/7296477. We changed the size parameter to 50 MB and now keep only 1 rotated file. Smaller files take less time to compress, and we didn’t need all those extra copies.
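For reference, a logrotate stanza along these lines captures the idea (the path is illustrative; see the gist above for our actual settings):

```
/var/log/httpd/*.log {
    size 50M      # rotate once a file reaches 50 MB
    rotate 1      # keep only one rotated copy
    compress      # smaller files take less time to compress
    missingok
    notifempty
}
```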

3. Logrotate Tomcat Log Files

logrotate-elasticbeanstalk in /etc/cron.hourly rotates catalina.out and localhost_access_log.txt out of the Tomcat logs directory! As nice as it is for them to do that, we had no idea it was happening. It didn’t have a large impact on us, since we already rotated our log files ourselves at shorter intervals. We ended up removing this unnecessary step anyway.

Original Log rotate script: https://gist.github.com/KamilMroczek/7296539

4. Publishing logs to S3

We noticed CPU spikes on our hosts at 20-minute intervals – at 10, 30 and 50 minutes past the hour – that we couldn’t explain. When we looked at our CPU usage through top, we found the culprit.

/etc/cron.d/publish_logs invokes a python script that publishes:

  • /var/log/httpd/*.gz (#2 above)
  • /var/log/tomcat7/*.gz (#3 above)

I originally thought we were uploading the same files many times over, since logrotate only rotated every hour and kept the last 9 copies while the publishing happened 3 times an hour. But it turns out the code has de-duplicating logic.

We removed this cron task because we didn’t need the data uploaded. We already uploaded our Tomcat logs separately, and the Beanstalk logs were of no use to us at the time. Nor have we ever used them to troubleshoot issues.

5. Amazon Host Configuration

The entire configuration for your environment can be found through AWS CloudFormation. There is a describe-stacks (or cfn-describe-stacks, depending on the CLI version) call that allows you to pull the entire configuration for an environment. We are in the process of auditing ours. More complete instructions are here:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-describing-stacks.html
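With the newer CLI, the call looks something like this (the stack name here is hypothetical; substitute the stack backing your environment):

```
# Dump the parameters, outputs and resource status for one stack
aws cloudformation describe-stacks --stack-name my-beanstalk-env
```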

As with all tough problems, troubleshooting them inevitably gives you a deeper understanding of your system and its architecture. When you own and provision your own servers, you understand everything on them because you are responsible for creating the template. When you use a hosted solution such as Amazon Web Services, you can run into the problem of not knowing everything about your image. We learned that you need to take the time to understand what you are getting.

Setting up EclipseLink MOXy

I wrote earlier about how I found that the EclipseLink MOXy library performs great at deserializing JSON. I wanted to share a workaround that worked for me that I didn’t find anywhere else.

1. First here is some sample code for doing manual deserialization of JSON using MOXy.

2. My root element got annotated with @XmlRootElement.

3. All my objects were annotated with @XmlAccessorType(XmlAccessType.FIELD). Other types here.

4. For fields whose names didn’t match how they came in over the wire, I annotated them with @XmlElement(name = "<json key>"). For example, if my POJO attribute is named "personId" but it comes in as "id", I would annotate like:

@XmlElement(name = "id")
public Integer personId;

5. I removed my annotations and parameters from my servlets.

6. Many sites tell you to create a jaxb.properties file in the package where your deserialization POJOs live and add the following line. This tells JAXB which implementation to use.

javax.xml.bind.context.factory=org.eclipse.persistence.jaxb.JAXBContextFactory

But this didn’t work for me. Instead of creating JAXBContext objects statically using

JAXBContext jc = JAXBContext.newInstance(Request.class);

I decided to generate them using the JAXBContextFactory in code:

JAXBContext jc = org.eclipse.persistence.jaxb.JAXBContextFactory.createContext(new Class[] { Request.class }, null);

And this gave me access to the POJOs I had annotated. Once I figured that out, everything worked.

Some links that I found helpful:

JSON Deserialization and CPU

At Scout Advertising we have built an ad impression buying engine written in Java and running on Apache Tomcat 7.  At peak, our server handles ~10-15K external requests/second.  We are in the process of some major re-architecting to help us scale to 5-50x our current volume.  As part of that effort, we decided to move our JSON deserialization off of Jersey 1.12, which uses Jettison 1.1.

Our long-term goal is to remove our dependency on Jersey so we can explore different web architectures for handling requests.  I was tasked with moving the JSON deserialization step out of Jersey and into our own module.

Criteria for new deserialization library

  • deserialize to POJOs (plain old Java object) without too much custom code
  • comparable speed to Jettison 1.1
  • comparable CPU performance to Jettison 1.1

After researching libraries online, the general consensus is that the best JSON libraries for speed are GSON and Jackson:

http://stackoverflow.com/questions/2378402/jackson-vs-gson

http://www.linkedin.com/groups/Can-anyone-recommend-good-Java-50472.S.226644043

There is also a great benchmark for JSON library performance.  It gives you a good sense of the libraries available.  But you should always run benchmarks for your own use case, which I did below.
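Mine were specific to our bidder, but the shape of such a benchmark is simple. Here is a minimal sketch (the class and method names are mine, and parse() is a stand-in you would replace with the actual Jackson/Jettison/MOXy call and your real payloads):

```java
// Hypothetical micro-benchmark skeleton: time a deserialization routine
// on your own payloads. parse() is a stand-in for the library call.
public class DeserBenchmark {
    static int parse(String json) {
        // Placeholder work; swap in your real unmarshalling call here.
        return json.length();
    }

    static long timeNanos(int iterations, String payload) {
        long start = System.nanoTime();
        int sink = 0;
        for (int i = 0; i < iterations; i++) {
            sink += parse(payload);
        }
        // Use the result so the JIT cannot eliminate the loop entirely.
        if (sink == Integer.MIN_VALUE) {
            System.out.println(sink);
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        String payload = "{\"id\":1,\"name\":\"test\"}";
        timeNanos(100000, payload); // warm-up pass for the JIT
        long elapsed = timeNanos(1000000, payload);
        System.out.println("1M parses took " + (elapsed / 1000000) + " ms");
    }
}
```

Pair the wall-clock numbers with CPU and memory readings from VisualVM or top, since (as we found) throughput alone can hide a CPU regression.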

Try 1 – Jackson is the right man for the job

I decided to go with Jackson 1.x with data-bind.  There is a lot of good documentation, it is widely used, and it has many supporters.  We already used the library elsewhere in our codebase, so this approach wouldn’t add any new dependencies.  The amount of effort to switch to Jackson 1.x was minimal; it mainly involved changing the class annotations on our POJOs.  After a good amount of testing, we released the code and everything was working fine.  We host our bidding engines on AWS, and after about a week we realized our servers were running hot (CPU) and we were using ~20% more servers on average (we employ a scaling policy based on CPU).  The increase coincided with the release of the new deserialization code.

After digging through our commits, I was able to prove that the extra processing was coming from using the Jackson 1.x serialization vs Jersey’s Jettison library.

I was able to reproduce the results in our load testing environment.  Perfect!  My load tests showed Jackson was using ~15% more CPU and more memory as well.  Here are the graphs of CPU and memory from VisualVM.

Jersey 1.12 w/ Jettison 1.1

CPU is hovering just below 80%.


Jackson 1.x (with databind)

CPU is hovering between 90-95%.


Try 2: Sequels are always better?

Now that I could reproduce the behaviour, the goal was to find a library that would perform better.  I chose Jackson 2.x (with databind) over GSON, since the only thing I had to do to switch was include a different library.

But still no luck.  CPU was just as high.

Try 3: EclipseLink MOXy

I stumbled upon MOXy, which also uses JAXB annotations to build objects.  Getting the code up and running took a little time, but once I got it working, I proved that MOXy used much less CPU than Jackson, and slightly less than Jettison.  It also didn’t noticeably change our latencies, which was another requirement.


I will be writing another post on how I used MOXy, since I had some trouble and no other tutorials that I found had worked for me.

In conclusion, we will be trying MOXy in production.  It provides the speed without blowing out our CPU.  I couldn’t find anything else on the web that compared CPU performance; most benchmarks I found compared speed only.

Everyone will be a programmer one day

We are currently in the first evolution of computer programming, but we are seeing the beginnings of the second.  The first evolution was based on humans learning to program.  Many industries have been revolutionized by technology, many more than once, but there are still many to go.  Part of the problem is that people in technology are not as heterogeneous as society at large, and that leaves gaps in knowledge.  Over time those gaps grow smaller, but it can take a while.  Being able to program is powerful because it allows you to build solutions for yourself.  Every industry has problems which can be solved or improved by software.

When I was coaching and playing for the MIT Graduate Men’s soccer team, I got to meet a lot of really bright people who had the opportunity to travel, study and innovate in important industries.  Many of the post-grad students, especially in the sciences, had learned to program out of necessity.  Their response was usually something like, “I am so much more productive and/or thorough with my work.”  Most of them had learned it very late in their academic careers, but I realized that this type of training will make it into many more career tracks.  Steve Jobs once said in an interview that he believed everyone should learn to program, and that programming should be a required skill taught in school.

Not everyone will become a programmer.  But I believe many more people will have the ability and access to program.  There are many examples today, and the trend is upward.  High-level frameworks are being created to let non-programmers build systems.  Services like CodeSchool are enjoying a big bump in traffic from people wanting to learn to program.  It should become a core class in high schools, if it hasn’t already.  Programming is a versatile tool.  It is not like a mathematical formula that you learn and apply only in certain situations.  It is not a war that you learn about and recount later.

Programming is a tool to solve problems.

Don’t get me wrong.  Programming well is hard, especially when you are trying to architect large systems.  But in many circumstances that is not needed.  Small programs for repetitive tasks or calculations don’t need much design, but they can be extremely useful.  Once people in various industries start to learn programming early in their careers, we will see another step-function change in those industries.  The best problems to solve are the ones you have yourself.

Technology has always been about revolutionizing industries.  Many industries come later than others, and some have already been changed more than once.  Once we can arm everyone with the proper tools to solve their own problems, we will see another large return on investment in programming.

Discuss on Hacker News.

Everyone Should Try a Start-up

Last February I decided to move from Boston to LA for a new job.  Previously, I was working for a medium-sized e-commerce company named Vistaprint.  It was my first job out of university, and I spent a great 4.5 years there.  But I decided that I wanted to try my hand at a start-up to see how I would like it.  Most of my knowledge of start-ups at the time came from Hacker News articles.  I knew it was a lot of work, and I wanted to try it before I “had a family and settled down”.  Right around this time, an old University of Waterloo friend, John Hinnegan, contacted me about a start-up he co-founded.  To make a long story short, I joined his company, ThinkNear, which at the time had 4 employees, was just over a year old, and was in the process of pivoting.  After an incredible 8 months, we were acquired by our current parent company, Telenav.

People say that you learn most from your failures, but you should also learn from your successes.  Keep in mind I only worked at a start-up for 8 months.  I also wasn’t a founder and came much later in the process (after A LOT of ups and downs, from what I hear).  Here is a list of differences and interesting insights I came up with after analyzing my brief stint at a start-up.  This list is meant for people who are debating joining a start-up over some larger company, not necessarily for people who want to start their own thing.

1. It’s all about doing.

There is so much that needs to get done at a start-up.  When I first came in, I was cranking out code for 10-12 solid hours a day.  I loved it.  There was so much to do, so few bodies, and what felt like so little time.  In theory, the main idea is what the company was founded on, so it all comes down to execution.

2.  You will build NEW things!

This one is for the software people.  At larger companies, the percentage of your development time spent building new features is probably ~30%.  At a start-up it is >80%.  Factor in that you sit in almost no meetings, and the disparity is even larger.  Because you are building new stuff with little oversight, you learn a great deal about designing software components through trial and error.  If the business grows, you will run into scaling problems and have to re-architect your solutions.  I ballpark that I learned more in the last year than I did in about 3 years at my previous company.  Nothing beats building stuff and seeing it break, then iterating on it and improving.  This was my favourite part of working at a startup.

3.  Efficiency in execution.

Time is insanely valuable at small companies.  At larger companies you often have time to plan and architect solutions; in a start-up, you need to move quickly.  You always need to balance the effort involved against achieving an acceptable goal.  You often write code that you know will not scale past 10x the volume or will need to be re-written in 6 months.

4. Jack of all trades.

Developer.  That’s you.  QA Tester.  That’s you.  Release Engineer.  That’s you too.  Bug chaser-downer.  You guessed it.  You will be doing everything.  If you don’t want to do this, then you should go work at a cushy big company.  It’s not glamorous but it will make you a well-rounded developer.

5. Your ideas will be needed.

I started this list off by saying you must be a doer.  But you will have plenty of chances to dream up solutions to real problems.  During the course of the startup you will encounter so many problems that do not have trivial solutions, and you will be tasked with coming up with solutions.  Or maybe you hit a problem and thought of a solution off work hours.  To bring in an idea and say “I have a potential solution to problem X” and have your boss say “yeah, let’s do that next week” is very empowering.  At a larger company, you will more than likely be told that “it doesn’t fit on the project plan” or “we need to talk to X”.

6. Your work will matter.

Big companies spread their risk across many different projects.  That is just smart.  Startups don’t have that luxury.  Not only will you be writing production code right away (a claim that many large companies like to make), but you will be responsible for major features.  If you bone it, you put a dent in your company.  That’s what makes it more exciting.  You get more responsibility and the satisfaction of doing work that you can see move the needle.  In larger companies, projects can get cancelled and your work is wasted.  No time for waste in startups.

Anyways, this is a short list.  Everyone should try a start-up at least once to experience what it is like.  You only need to be a doer who likes to build new stuff efficiently, while working the entire release cycle with lots of ideas handy.

We Can Do Better Than Capitalism

I am sure I have read somewhere that you shouldn’t write publicly when your emotions are out of whack.  Oops.  I caught up on some of the news this week about the bill (H.R. 933) that passed Congress and it makes me sad.  I don’t like being sad.  Just to be frank, I am not super connected to the political world.  I get 95% of my political news from Jon Stewart – I frankly don’t have enough time in my life to wade through most of the garbage that comes out of politicians’ mouths – and I can’t blame them, the media scares them into groupthink.  Jon Stewart hits the tip of the iceberg, and if I find something interesting I will read some articles.

So when I heard about the provision that was added to H.R. 933, I got really angry.  If you haven’t read the text, it is below.  From section 735:

“In the event that a determination of non-regulated status made pursuant to section 411 of the Plant Protection Act is or has been invalidated or vacated, the Secretary of Agriculture shall, notwithstanding any other provision of law, upon request by a farmer, grower, farm operator, or producer, immediately grant temporary permit(s) or temporary deregulation in part, subject to necessary and appropriate conditions consistent with section 411(a) or 412(c) of the Plant Protection Act, which interim conditions shall authorize the movement, introduction, continued cultivation, commercialization and other specifically enumerated activities and requirements, including measures designed to mitigate or minimize potential adverse environmental effects, if any, relevant to the Secretary’s evaluation of the petition for non-regulated status, while ensuring that growers or other users are able to move, plant, cultivate, introduce into commerce and carry out other authorized activities in a timely manner: Provided, That all such conditions shall be applicable only for the interim period necessary for the Secretary to complete any required analyses or consultations related to the petition for non-regulated status: Provided further, That nothing in this section shall be construed as limiting the Secretary’s authority under section 411, 412 and 414 of the Plant Protection Act.”

Anyways, the gist is: if a court finds that the approval of a genetically modified crop was acquired illegally, the USDA is required to ignore the court’s decision until it can investigate more thoroughly.  Isn’t this backwards thinking?  Shouldn’t it be “let’s see if it is healthy before distributing it” instead of “let’s distribute it until we find it is not healthy”?  The US has routinely made big blunders in terms of public health – compare the number of substances banned by the FDA in cosmetic products in the US vs in Europe.

Too many of the critics and supporters of the bill are worrying about the wrong parts of the issue – details like the fact that the item was added anonymously, or that the provision was in there for more than a year.  WHO CARES.  The important question is why it got into the bill at all.  For critics, focusing on these insignificant pieces of information gives the supporters rebuttal power that detracts from the actual issues.

I read an article by Jon Entine on Forbes.  His logic could be cut down by anyone with just a little bit of reasoning.  For example,

“To date, no court has ever held that a biotechnology crop presents a risk to health, safety or the environment.”

Since he doesn’t state it, I assume he means that we shouldn’t be worried that any crop will ever harm humans.  Or that because these companies have never done anything wrong, they could never possibly do so in the future.  Both would be ridiculous claims.  Take this scenario: what if a biotechnology company influenced a court’s decision by using money or power…just like Monsanto did by increasing the amount of money it has donated to Roy Blunt, the senator from Missouri and the man who helped insert the provision.  I am not saying that is true, but rather that it is not that large a stretch.

Too many people get stuck in the weeds and can’t see the big picture.  Why would we subvert the judicial system’s power for big business?  The more rulings I hear from Congress and the Supreme Court, the more I am saddened.  This reminds me of the “corporations are people” debate.  How could anyone not logically conclude that if you do not cap campaign donations by corporations, then politicians will be controlled by those with the most money (e.g. corporations)?

Many people in the US believe capitalism is the perfect system.  They are wrong.  It is just the best we have so far.  One major downfall is that money is power, and it rules all.  Money has more influence than government (hence why senators take money and vote “on behalf” of companies).