Ketan's Home

August 31, 2017

Test Coverage – going from bad to good, and knowing when to do it

Filed under: Uncategorized — ketan @ 12:14 AM

Having production code with good test coverage is very beneficial to a development team.
Broad coverage can keep bugs and unintended behavior from entering the codebase, and everyone can sleep better at night. But when a codebase doesn’t have much coverage (or any at all), it can be really difficult to introduce it.
We’ll discuss when you should introduce or increase test coverage, and go in depth into the ways in which it’s beneficial for you and your team.

Coverage for legacy codebases?

As a huge believer in tests, I make a concerted effort to add coverage to every project I work on.
Realistically though, there are times where tests are just not that beneficial.
For instance, when dealing with a legacy codebase (an internal API that is rarely updated), adding coverage offers diminishing returns.
If your legacy codebase works reliably in production with little to no regression, adding tests retroactively may simply work to verify what is already validated in production.
Often, the legacy parts of an application serve as dependencies, so I suggest focusing your efforts toward testing the new API leveraging the old code.

Still unsure of the benefits of coverage?

Here are some of the benefits of increased test coverage:

  • Ensure your code works as expected
  • Deploy code with greater confidence
  • Decrease the likelihood of 1am phone calls

Ensure your code works as expected

Adding tests encourages you to conceptualize how you want your program to behave and account for edge cases.

Deploy code with greater confidence

The greater your coverage, the more confident you can be that new features did not introduce regressions.
In the event that deployment introduces a bug, tests can serve as documentation to both track down the bug and squash it.

Decrease the likelihood of 1am phone calls

Pager duty is not fun. Seriously.
Nor is it fun for the person calling you.
Increased coverage will help prevent bugs that have the habit of waking you up in the middle of the night.
Future you will appreciate it.

Okay, I’m sold! How do I start adding coverage?

Once your team is on board with increasing test coverage, I suggest first writing tests for new feature work.
That way, new changes to the codebase are covered, and good habits will start to form amongst team members (and in turn, new hires).

Your next priority should be adding coverage for existing code (non-legacy), but this must be done much more carefully.
There are generally two ways to go about it:

  • Scrap the existing implementation, one discrete module at a time, and test drive the recreation of its code
  • Add tests to validate the existing code

Scrap & test drive

Pros:

  • Scrapping and test driving will likely result in better design and code clarity.
  • The tests will serve as a form of documentation for your teammates and future self.

Cons:

  • Throwing out existing code can cause you to miss a business case that wasn’t obvious from the implementation.

Add coverage to pre-existing code

Pros:

  • Generally a quicker process than scrapping and test driving
  • Coverage of an existing API can make refactoring easier

Cons:

  • May work to simply validate poor design.
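
If you take the second route, characterization (or “golden master”) tests are a common way in: you pin down what the code does today, quirks included, before touching it. Here is a minimal sketch in C++, with a hypothetical legacy function standing in for your real code:

    #include <cassert>
    #include <cstdio>
    #include <string>

    // Stand-in for an untested legacy function (hypothetical example).
    std::string formatInvoice(double amount) {
        char buf[32];
        std::snprintf(buf, sizeof(buf), "$%.2f", amount);
        return std::string(buf);
    }

    // Characterization tests: assert the *current* behavior, even where it
    // looks odd. A refactor that changes any of these outputs fails loudly.
    int main() {
        assert(formatInvoice(10.0) == "$10.00");
        assert(formatInvoice(0.0)  == "$0.00");
        assert(formatInvoice(-5.0) == "$-5.00");  // strange format? pin it now, fix deliberately later
        std::puts("characterization tests passed");
        return 0;
    }

The point is not that these assertions describe correct behavior, only that they describe existing behavior, which is exactly the safety net you want before refactoring.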

Side effects

Testing certainly has a learning curve, so your team may spend up-front time figuring out how to best test new and old code.
This can result in a decrease in sprint velocity, and may require more complexity points in story estimation.
That said, adding tests to new features will reduce bugs and, in turn, the cost of squashing them later.
Adding coverage to existing code pays down technical debt that can prove crippling if left unpaid.

Summary

To wrap up, increasing test coverage will improve design, increase team confidence and cohesion, and decrease the stress associated with regression.

Senior, Lead, or Principal developer in NYC? Stride is hiring! Want to level up your tech team? See how we do it! www.stridenyc.com

from The Practical Dev http://ift.tt/2vEYbfX

August 30, 2017

Why there is no way around test automation (except one)

Filed under: Uncategorized — ketan @ 9:48 PM

Right now, manual testers in software quality assurance basically fight a lost cause. With every sprint and every iteration, the number of features increases as the software grows, because the work of developers (ideally) adds up.

Software “grows” as the number of features increases

But the work of manual testers doesn’t add up. New features have to be tested as they are introduced; this part of the work scales with the number of developers. But software can break in interesting ways, so all features should be tested before a release, including existing ones. If a team has a fixed number of developers and testers, over time the testers are bound to fall behind.

There are only three possible solutions:

  1. Add more manual testers as the software grows
  2. Let users find your bugs
  3. Automate tests

Adding more manual testers

Adding more manual testers means increasing costs. More people in a team do not scale linearly, as the overhead for organization and communication increases. So even with an unlimited budget (which most companies don’t have), this is a limited option. And even if it weren’t, there are many reasons why manual regression testing is a bad idea.

Let users find your bugs

Accepting the risk, by testing only a small sample of the features or testing them superficially, is acceptable only in certain situations. Google and Facebook do not implement life-critical software, so showing an error to some small percentage of their users is a viable solution (for them). This is what lets them implement continuous delivery. But this possibility tends to be the exception. If your software is installed locally or is critical in any way, this is not an option.

Automate tests

So eventually you end up in the situation that you have to automate the tests in order to deal with this inequality. This is probably the reason why test automation has seen such a boom in recent years, and why test automation engineers are in such demand right now.

There is a huge and recurring discussion among testers about whether test automation will eventually replace human testers. Anyone who thinks that has misunderstood both the reason for test automation and its capabilities. You have to automate tests. But not in order to replace testers. You have to automate tests in order to enable testers to do their job: proper testing of new features.

Test automation is despised by some. Or at least it appears that way. They repeat over and over that test automation is not automated testing, and that test automation is of very limited value. I understand where this comes from and why they stress this point so much. I also understand that

  • a test tool is not a strategy;
  • test automation is a development process;
  • test automation is a significant investment; and
  • test automation projects require skills in programming, testing, and project management.

While all of that is true, it mostly refers to specific tools and specific experiences. And it doesn’t address the issue I stress here. I acknowledge that most of today’s test automation tools are far from perfect, but that is another discussion.

What test automation can’t do

Test automation is very much comparable to a version control system. It highlights changes in the behavior of the system and asks the user to verify or undo those changes. Thus it cannot find bugs that existed at the time the test was created (historical bugs), because those bugs are already baked into the tests. Hard-to-change existing tests can even be detrimental, as they enforce defective behavior.

Test automation can only find new bugs in old functionality, and only a specific kind of such bugs: those that manifest as changed behavior. It cannot reason about or understand the software, so it will not notice if the system becomes inconsistent. It does not find functionality that should have changed to preserve consistency. So even when using test automation, this is not a no-brainer. You still have to manually test and review old functionality to make sure it stays consistent with the overall system.

Of course, testing exhaustively or testing everything is impossible. I would argue that testing is a risk–cost calculation: how much risk are you willing to take, or, as others have put it, “how long do you look into the rearview mirror”? So the number of tests to automate is always a cost function.

What test automation can do

This is what test automation really does, albeit suboptimally: it highlights changes.

Test automation helps to detect when functionality that once worked (and was tested and approved) ceases to work. In other words, test automation is a way to help you find unwanted changes to the behavior of the system under test. These unwanted changes are also called side-effects or regressions. Seen that way, regression testing and test automation are version control in disguise, i.e. version control of the behavior of the software.
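
To make the version-control analogy concrete, here is a minimal approval-style test in C++ (hypothetical names throughout): it diffs today’s behavior against a stored, approved snapshot. Like version control, it can only flag changes; it cannot judge whether the approved snapshot was ever correct in the first place.

    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>

    // Hypothetical system under test.
    std::string renderReport() { return "total: 42\nstatus: OK\n"; }

    int main() {
        // Load the previously approved snapshot of the behavior.
        std::ifstream approved("report.approved.txt");
        std::stringstream stored;
        stored << approved.rdbuf();

        const std::string current = renderReport();
        if (current != stored.str()) {
            // Behavior changed: leave the new output for a human to review.
            std::ofstream("report.received.txt") << current;
            std::cerr << "Behavior changed; review report.received.txt and re-approve if intended.\n";
            return 1;
        }
        return 0;
    }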

Test automation is no silver bullet, but it can be of help. Test automation is a tool, a support, a utility to help testers with what would otherwise be a sheer insurmountable amount of effort.

TL;DR: With ever growing software, there is no way around test automation, unless you want your users to find your bugs for you. But this is meant to enable testers to do their real work: critically challenge the system instead of becoming routine-blinded.

This post has also been published on Medium. If you liked it, please press the heart, tweet it, or otherwise spread the word.

from The Practical Dev http://ift.tt/2xyopCZ

Control Thy LED

Filed under: Uncategorized — ketan @ 7:41 AM

In a previous article, I discussed LEDs in general and their properties. In this write-up, I want to give some examples of driving LEDs and compare a few of the most commonly used methods. There is no “one size fits all” but I will try to generalize as much as possible. The idea is to be able to effectively control the brightness of the LED and prolong its life while doing it. An efficient driver can make all the difference if you plan to deploy LEDs for the long haul. Let’s take a look at the problem and then discuss the solutions.

The Problem of LED brightness control

Most newbies will be interested in making an LED glow without blowing it up. A little further down the line, it comes down to brightness control and then mixing colors to produce any shade from the color picker. In any case, it is essential to have a clear understanding of the end application. A lighting application such as a workbench light will seldom require romantic mood-light control. On the contrary, a disco light will require fluctuating intensities of various colored LEDs.

So how is brightness perceived? Logically speaking, when you have two LED lamps of 100 lumens each, the result should be double the brightness. In reality, human eyes are logarithmically sensitive to intensity change, which means that doubling the intensity will be perceived as only a small change.

Perception of light intensity follows Stevens’ Power Law, with an exponent that depends upon how much of your field of view the light occupies. For a 5-degree spot the exponent is about 0.33, but for a point source it is about 0.5. This means that a 5-degree spot needs to increase in intensity by a factor of 8 to seem twice as bright, while a point source needs to increase by a factor of 4.
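
Those factors fall straight out of the power law: perceived brightness goes as intensity^a, so looking k times brighter requires a k^(1/a) increase in intensity. A quick C++ check of the numbers above:

    #include <cmath>
    #include <cstdio>

    // Stevens' Power Law: perceived brightness ~ intensity^a.
    // To appear k times brighter, intensity must grow by k^(1/a).
    double intensityFactor(double k, double a) { return std::pow(k, 1.0 / a); }

    int main() {
        std::printf("5-degree spot (a = 0.33): %.1fx intensity to look twice as bright\n",
                    intensityFactor(2.0, 0.33));  // ~8.2
        std::printf("point source  (a = 0.5):  %.1fx intensity to look twice as bright\n",
                    intensityFactor(2.0, 0.5));   // 4.0
        return 0;
    }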

Let us start with a simple 1 W SMD LED like the one available from Adafruit. This one is rated at 90 lumens and comes with an aluminum PCB as a heat sink. Here is a quick look at some of the parameters for the LED.

The datasheet has some pretty important information, starting with forward current (continuous) and peak forward current. The values are 350 mA and 500 mA respectively and should not be exceeded.

Two more important pieces of information are represented as graphs. The first is the forward current vs. voltage graph, which shows that a voltage of around 1.8 V is enough to forward-bias the LED. The current rises steeply after that, and at around 3 V the LED is reported to draw around 200 mA. The second curve is the relative luminous intensity vs. forward current, which shows that the current controls the amount of light output (the straight line stretching up to the “4” mark).

If the LED truly followed Ohm’s Law, the current would be directly proportional to the voltage, and we could simply vary the voltage to control the brightness. There is just one small hiccup: the forward-current curve is so steep that a small increment in voltage produces a large change in current. The brightness will be different if you connect a coin cell as opposed to two alkaline batteries. Both have a 3 V potential difference, but the amount of current supplied by each is different, and consequently the brightness is different. Rather than control the voltage, it’s better to control the current passing through the LED directly.

The Simple Approach

The easiest thing to do is add a potentiometer in series with the LED. Simple! Essentially when you vary the resistance, Ohm’s Law kicks in and voila! Variable resistance equals variable current equals variable brightness.


Here is a simulation of an LED with a variable resistor varying from 100 ohms to 1 kilohm. The only problem is that if the resistance of the LED changes or the voltage fluctuates, the result may be devastating. This is essentially an open-loop control and there is no feedback from the circuit to the user other than varying brightness.

Of course, there is also the issue of efficiency, since power will be dissipated in the potentiometer as well.
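
To put numbers on that, the series resistance and the power it burns come straight from Ohm’s Law. A small C++ sketch with illustrative values for the 1 W LED above (not datasheet figures):

    #include <cstdio>

    // Series-resistor sizing for an LED: R = (Vs - Vf) / I, plus the power
    // the resistor must dissipate. Values are illustrative only.
    int main() {
        const double Vs = 5.0;    // supply voltage
        const double Vf = 3.0;    // LED forward voltage at the target current
        const double I  = 0.200;  // target LED current in amps

        const double R = (Vs - Vf) / I;  // required series resistance
        const double P = I * I * R;      // power burned in the resistor

        std::printf("R = %.1f ohms, resistor dissipates %.2f W\n", R, P);  // 10.0 ohms, 0.40 W
        return 0;
    }

Note that the resistor here burns 0.4 W while the LED itself gets about 0.6 W, which is exactly the efficiency problem mentioned above.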

Current Control

Next easiest is to create a constant-current circuit. There are a number of ways to build a simple constant current source, and I highly recommend going through “The Art of Electronics” for a detailed explanation. Unsurprisingly, there is a Wikipedia article on the subject as well.

You could use a classic LM317 variable voltage regulator to provide a small constant current. It is not very efficient, since a lot of heat is dissipated at the adjustment resistor at higher currents.

The better method is to use a closed-loop circuit that provides analog feedback to inhibit excessive currents and compensate for variations in the load. The circuit shown is a simple current limiter and is recommended since it offers a higher efficiency than other transistor circuits.

It works by limiting the current through R_sense such that the drop across it is no more than 0.6 V. If that happens, Q2 switches ON and Q1 is switched OFF, which limits the current through R_load, which in our case is an LED. By choosing R_sense using Ohm’s Law, we can set the maximum current through our LED.
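
For example, to cap the current at this LED’s 350 mA continuous rating, R_sense = 0.6 V / 0.35 A ≈ 1.7 Ω, and the sense resistor dissipates about 0.6 V × 0.35 A ≈ 0.2 W at full current.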

I personally prefer the above circuit with Q1 replaced by a MOSFET; however, in cases where we want to control the brightness digitally, the next method is a much better fit.

The Digital Method

The next circuit involves using a train of pulses to switch the current through the LED ON and OFF. It’s like flicking the power switch quickly enough that the light seems dimmed. Commonly known as PWM, or Pulse Width Modulation, a series of pulses with variable duty cycles (ON and OFF times) can be employed for the task.

Under this topic, there are two parts to be discussed. The first is the switching source which can be a simple oscillator or a microcontroller. The second is the switch itself which will be the driving stage of this design. Let us take a look at both in brief.

The PWM source

For generating the pulses, the humble 555 is a good choice. The circuit below shows a simple PWM circuit with T1 as the switching element.

At this point, we have a number of options and questions to be answered:

1. What is the correct frequency for the PWM?
2. How do I know the amount of current being supplied?
3. How does all this affect the brightness?

The frequency of the PWM affects the flicker perceived. A simple example from digital video: record NTSC footage under 50 Hz mains lighting and your camera will pick up a lot of flicker, while switching to PAL, which matches the 50 Hz mains, helps a lot (the situation reverses in 60 Hz countries). Try it out right now with your webcam and see the effect.

The idea is that higher switching frequencies are better, but you cannot go arbitrarily high. Remember, every LED has a turn-on time it needs before it starts glowing; if you switch too fast, the LED just won’t turn ON. Another consequence is that the frequency affects the efficiency of the switching element, and we will touch on that in a moment. Right now we need to figure out the best frequency for our LED. Scroll back up and check out the last entry in the datasheet snippet.

It says 1 kHz, which is what the manufacturer recommends, and in most cases this information will be provided in the datasheet itself. If not, anything above 500 Hz should be usable. Check out this link for an application note on dimming LEDs.

Since this technique allows digital control over the current, and hence the brightness, the next step is to figure out how to map control inputs to brightness. Remember, the luminous intensity is directly proportional to the current, but perceived brightness is logarithmic. We need to translate a linearly stepped input into a logarithmic current variation.

When using microcontrollers or even FPGAs, the answer is very simple: lookup tables! Keep a list of PWM duty cycles that correspond to a sequence of perceived brightness values. A great example is here, where the designer uses an FPGA to build a logarithmic LUT that produces a linear perceived light intensity from user inputs. The same lookup table approach works with an Arduino, and I strongly encourage you to try it.
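
Here is a minimal Arduino sketch of that idea, with a gamma curve standing in for the log LUT (illustrative only, not the FPGA design from the link): equal steps of the input level then give roughly equal steps of perceived brightness.

    #include <Arduino.h>
    #include <math.h>

    // Map a linear 0-255 brightness input to a gamma-corrected PWM duty
    // cycle. The table is built once so no floating point runs in loop().
    const int LED_PIN = 9;  // any PWM-capable pin
    uint8_t gammaLUT[256];

    void setup() {
      pinMode(LED_PIN, OUTPUT);
      const float gamma = 2.2;  // rough approximation of the eye's response
      for (int i = 0; i < 256; i++) {
        gammaLUT[i] = (uint8_t)(255.0 * pow(i / 255.0, gamma) + 0.5);
      }
    }

    void loop() {
      // Sweep the *perceived* brightness linearly up, then down.
      for (int level = 0; level <= 255; level++) {
        analogWrite(LED_PIN, gammaLUT[level]);
        delay(8);
      }
      for (int level = 255; level >= 0; level--) {
        analogWrite(LED_PIN, gammaLUT[level]);
        delay(8);
      }
    }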

Personal Note: When LEDs first appeared, one of the problems we faced was that the LED drivers that came with the lamps would malfunction. I initially designed a small circuit to limit the current, along with a thermistor to shut down the LED if the switching element overheated. Eventually, dedicated solutions started coming out, which we will take a look at in a later section.

Let’s Switch: MOSFET vs BJT

The second item on the menu is the actual switching element. You can use a BJT, a FET, or a MOSFET depending upon your budget and state of mind. BJTs are simpler creatures and require very few additional components. A 2N2222 can safely deal with 800 mA of current, which is good enough for many applications.

MOSFETs, on the other hand, are more demanding in terms of components and require a little care to deploy. In exchange, they offer a far lower ON resistance, of the order of milliohms, and higher efficiency. Let’s take a look at both.

The BJT LED Driver

Here’s the simplest BJT LED driver circuit. It consists of a transistor connected in the common-emitter configuration. The transistor switches on when the input switch is closed, allowing current to flow through the LED to the ground terminal. The output resistance is calculated as

r_o = (V_A + V_CE) / I_C, where V_A is the Early voltage.

This is not constant and varies with the operating point of the transistor; under saturation conditions it is of the order of a few ohms. The power dissipation is insignificant for a few milliamps but quickly becomes a problem for larger current draws.

I refer you to a video post by [Dave Jones] of the EEVblog where he uses a BD136 and a 555 to vary the brightness of LEDs on a piece of equipment. This works for loads with lower wattages; however, if you are looking to drive larger LEDs, expect to add some pretty hefty heatsinks.

MOSFETs are an LED’s Best Friend

A MOSFET has a very low ON resistance, of the order of a few milliohms, which means that in that state it dissipates very little heat as per P = I²R. At 1 A, for instance, a 10 mΩ MOSFET burns just 10 mW.

Since these are voltage-driven devices with very high input impedance, we can safely parallel a bunch of them together. Unfortunately, they are also susceptible to false turn-on events, so switching circuits must be designed carefully. A more detailed explanation is available here for the interested; for this write-up, we will continue with the general case.

Designing a Lamp

I recently bought two no-brand LED panels from a local hardware shop. The seller told me to connect them to a 12 V source and they would work. When I chained them together and connected them to a bench power supply, I found that at 12 volts they can draw up to 2.7 amps! The brightness is frightening at close range, so I need to be able to turn it down.

The next step is figuring out which MOSFET is the best fit. Considering overshoots while switching, I would like a 20 V or even 30 V drain-source voltage device to be on the safe side. As for the current: if I intend to pass around 5 amps of peak current, an R_DS(on) of 0.1 ohms would mean 2.5 watts! In that case the heatsink cost would greatly affect my final product. Instead, I would like something with a fraction of that ON resistance, like 0.01 ohms or less, especially among SMD devices.

Next, I intend to switch the MOSFET with either a 555 or an Arduino. This translates to a 5 V V_GS, so logic-level MOSFETs are preferred. Since I will be driving the LEDs from a 12 volt supply, I could also use a transistor stage or a dedicated MOSFET driver. Without one, the effective resistance will be higher, but it is worth a try nonetheless.

I am also tempted to look at the PH2520U and the now-obsolete MTP3055VL, which is a logic-level MOSFET. The MTP3055VL has a relatively high ON resistance and can be turned on with 5.0 volts, at the expense of 0.18 ohms and a lot of power dissipated.


The IRF530, IRF540, IRFZ44N, and AO3400A are all good choices, since I have a couple of each in stock. Using an IRFZ44N, I made a simple LED driver and drove it from an Arduino Uno directly. Remember, the Arduino pins go up to 5 V, and I used the fade example that generates PWM out of the box. The frequency of the PWM signal is 490 Hz, which is pretty decent.
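
If the default 490 Hz ever proves too low (the datasheet above suggests 1 kHz), the Uno’s Timer1 prescaler can be shrunk to raise the PWM frequency on pins 9 and 10. A minimal sketch, assuming a stock 16 MHz ATmega328P Uno; note this also affects anything else that uses Timer1:

    #include <Arduino.h>

    void setup() {
      pinMode(9, OUTPUT);
      // The default prescaler of 64 gives ~490 Hz on pins 9/10. Prescaler 8
      // raises it to roughly 3.9 kHz; a prescaler of 1 gives ~31 kHz.
      TCCR1B = (TCCR1B & 0b11111000) | 0x02;
      analogWrite(9, 128);  // 50% duty cycle at the new frequency
    }

    void loop() {}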

The result is effective dimming of the panel. However, taking a closer look at the waveform, we see that the output has a significant rise time even with a single LED panel.

This is due to the capacitive parasitics as well as a weak current drive and can be remedied by adding a transistor driver stage. This TI application report (PDF) documents gate driver circuits pretty well with reference to a non-inverting bipolar totem-pole driver which has been studied in detail by [Joost Yervante Damad]. Since our switching frequency is in the lower range, these switching losses are insignificant. If we were to switch in the kHz range or MHz range, these parasitics would quickly be the death of our prototype.

In my case, I proceeded with no driver stage, then modified the code for a 75% duty cycle and measured the current draw at varying PWM values. It turns out the panel sucks up a little short of 1 A of peak current. The MOSFET did not heat up to the point of needing a heatsink, so the circuit is usable as-is for this LED panel as well. I could proceed to make a PCB for my little lamp; however, there is one more option I would like to look at.

LED Drivers

Dedicated LED driver chips let you control LEDs effectively without having to think about all the parameters. A good example is the TPS92512, which drives high-brightness LEDs using internally generated PWM. Current control is implemented on-chip, and external signals, PWM as well as analog, can be used to control the brightness linearly. No need for lookup tables.

I wired up a test board with the same LED panel such that the brightness is controlled using the IADJ pin. A simple preset was used to vary the voltage between 0.8 and 1.8 volts at the desired pin. The output is a clean and efficient varying voltage which is filtered by an output stage cap.

The PWM frequency is around 580 kHz when probed at the inductor. I could not see any oscillations at the output LED pins, though, which means the filter stage does its job effectively. I created a DIY version of the PCB in Autodesk Eagle (GitHub) which you can download to make your own.



There is a little OSHPark purple in there and I hope to solder it up myself. Looking at the size of the pins it should be a fun exercise. Let me know if you make one yourself.

Conclusion

So how do you drive an LED? The answer lies in your application area. For small LED current draws, BJTs are simpler and the least expensive. For medium current draws, MOSFETs are a better fit and if you want solutions that offer great out-of-the-box experiences, dedicated driver chips are the way to go. As for me, I have a lamp to finish which will use the middle road since it worked out in my tests. If I ever come to the point where I see flicker in my videos, then the TPS92512 solution will come in pretty handy. I am sure you have a solution of your own and the best way to share it would be a project on Hackaday.io. Go ahead, make a little lamp with light as white as snow and share with us your story.


from Hack a Day http://ift.tt/2wf3XYV

August 28, 2017

How to Level up as a Developer w/ Scott Hanselman

Filed under: Uncategorized — ketan @ 12:30 AM

Let’s listen to this talk as a community and engage in a constructive conversation from the POV of our industry and experiences. Feel free to share meaningful take-aways, raise questions (that folks in the DEV community might be able to help clarify), or state areas where we may disagree with the speaker’s POV in the comments.

About Our Speaker

Scott Hanselman has been programming since the ’80s. He blogs, has multiple podcasts, is an evangelist for Microsoft, a health hacker, a master of the Twitterverse, and speaks around the world. I met him at the 2017 Codeland conference. We chatted forever about biohacking diabetes, African American hair care, social justice, languages, and the difference between selling out vs. scaling yourself.

He is very much who I want to be when I grow up as a technologist: funny, candid, dynamic, incredibly talented, wears all the cool hats, and is just a joy to be around. After meeting him, my friend Chanice and I both secretly adopted him as our code dad and started absorbing his blog posts, talks, and podcasts as inspiration and instruction in how to scale ourselves as developers. This talk has changed how I think about development and is definitely worth a listen or three.

Talk Summary:

In this talk Scott explores the cycle of fear and distraction that keeps us from our potential. The fear of appearing to be a “phony” (not being good enough, not pushing production-ready code fast enough, not being smart enough, up to date on all happenings, and a master of everything) is a weight we put upon ourselves, and it causes us to divide our focus and energy (both of which are limited). We tend to combat those fears by hoping we can catch up after hours, but hope is not a strategy. To scale ourselves as developers we need to be able to differentiate between effectiveness (doing the right things) and efficiency (doing things right). The example given is that the first is choosing the right direction to run, and the second is running in that direction as fast as you can.

What I Learned:

Becoming a more productive developer starts with choosing how you are not spending your time, since the fewer things you do, the more of them you can get done. It sabotages your productivity to check email in the morning or on weekends (responding will trigger a response), and you shouldn’t put energy into things you don’t want more of, because through those actions you teach people how to treat you. Incoming information and requests need to be triaged (sorted by priority) based on the threefold nature of work (work you planned to do, work that appears unexpectedly during your day, and work that defines who you are as a developer), then time-boxed and addressed, and then you move on.

Optimizing yourself ultimately comes down to embracing flow and only allowing yourself to be wrapped up in the pursuits most meaningful to you (important but not urgent) first. Interruptions that cause a context shift are costly in terms of your time and energy. And fascinatingly, important info and tasks will often find their way back to you while you do this, and sometimes dropping the ball is the right answer.

Concepts expressed that stood out the most to me:

1) That there are only so many keystrokes in our hands, so we shouldn’t waste them.
Whenever we are engaged privately for our professional opinion or guidance in written form that would take more than a paragraph, we should instead document it in a way that can be readily shared with more than just one person, and send a link to the wiki, blog post, or article. It’s been said that you should write, engage on social media, and give talks about the things you want to be paid to do. I think this is an amazing optimization of our time and energy: it encourages us to write in those areas, can be referenced many times, helps build our brand as being knowledgeable on the subject, and invites learning opportunities from the community if it’s an area where we are confused.

2) There were many mentions of great methods, some of which I’ve tried before but many I would have never dared to consider before.
I’ve heard of, and tried, the Pomodoro technique but never attempted tracking my internal and external distractions during the focus periods. I’ve been trying it this way for the past 2 days and it has made such a difference.

I also had never thought of declaring email bankruptcy before: moving all emails older than a week into a different folder called “Not My Inbox” (because your inbox should be a place only for recent communications and info you’ve yet to process), in order to free yourself from the psychological drain of an endless “to be processed” environment.

3) Getting things done using the "Rule of Three" to create personal sprints for yourself.

Scott advised that staring at a dishearteningly long list of potential things to do is paralyzing to growth. Instead, we should create our own sprint to level up: pick only three things to get done today and try to get them done; those build into three larger things to accomplish that week, which build into three overall goals for the month, and then the year. He invites us to envision what it would take not to feel guilty, or like a phony, for a day, week, month, or year, and to use that vision to set goals that we then break into actionable chunks. Try figuring out on Monday what a great week would look like, then reflect on Friday on how it went and on areas for improvement.

I’ve used a three-goals-a-day system to focus my efforts previously (two that were urgent and one that was important to the bigger picture), but never anything like this.

Final thoughts:

I am so pumped to use the nuggets of wisdom shared in this talk by proactively putting more of these practices into my daily habits immediately. I hope you enjoyed the talk and were able to take something away from it as well!

from The Practical Dev http://ift.tt/2iuC2j2

August 27, 2017

The best cold-brew coffee maker

Filed under: Uncategorized — ketan @ 5:28 PM

By Nick Guy and Kevin Purdy

This post was done in partnership with The Sweethome, a buyer’s guide to the best homewares. When readers choose to buy The Sweethome’s independently chosen editorial picks, it may earn affiliate commissions that support its work. Read the full article here.

Over the past two years, we’ve looked at 15 cold-brew coffee makers, analyzed dozens of at-home brewing methods and recipes, made concentrate for more than 300 cups of coffee, and served samples to a tasting panel that included expert baristas. And after all our testing, we found that the OXO Good Grips Cold Brew Coffee Maker offers the best way to make smooth, delicious iced coffee at home. It’s easy to use and well-designed, and in our tests it made cold coffee with balanced acidity, a stronger aroma, and a cleaner finish.

Why cold brew

Cold brewing makes better iced coffee than refrigerating hot-brewed coffee. When you add hot-brewed coffee to ice, it slowly dilutes, resulting in a weaker-tasting beverage. Cold brew, which generally starts from a concentrate, is meant to be watered down, so adding ice, milk/cream, and not too much water provides a stronger, more flavorful drink. Additionally, brewing with slow, cold exposure, instead of heat, extracts fewer bitter flavors, so you’ll get a sweeter, milder-tasting coffee that’s better for drinking cold.

Every cold-brew coffee method works the same way: Start with a lot of ground coffee (more than you’d typically use to brew drip coffee), add water, let the mixture sit for eight to 24 hours, and then filter it. What’s left is either ready to drink or, more often, a concentrate that you should dilute with water or milk.

How we picked and tested

We had panelists do a blind tasting of six cups of cold-brew coffee. Photo: Kimber Streams

We began our research by reading editorial reviews from Cook’s Illustrated (subscription required) and the Wall Street Journal, plus one from Stumptown Coffee Roasters. These reviews helped us determine which cold-brew systems to call in for testing. We first tested each system by following the provided instructions to brew a batch of Trader Joe’s Kenya AA Coffee, noting any inconsistencies or complexities and how easy each system was to clean.

Our next step was a tasting panel. For this round we switched to an upscale bean—Joe Bean Mexico Chiapas—and brewed a fresh batch with each system according to the included instructions, which varied in the ratio of water to bean. We invited coffee professionals, coffee enthusiasts/nerds, and casual cold-brew drinkers to rank each sample on a 1-to-10 scale for taste, acidity, and body, as well as to note which was their favorite and why.

Finally, we conducted a second round of taste testing on our top models. We used medium-roast grocery-store beans from Wegmans brewed with a consistent water-to-coffee ratio (4.5-to-1), averaged from all three brewers’ instructions. We let the samples brew for 24 hours each, diluted the concentrate 3-to-1 and had a coworker serve us blind samples. This round’s results closely matched our original testing panel’s findings. It’s the brewers themselves, not their recipes, that make different cold-brewed coffee.

Our pick: OXO Good Grips Cold Brew Coffee Maker

The OXO brewer makes flavorful, money-saving concentrate, looks good on a counter, and is easier to use and store than any other pick. Photo: Michael Hession

The OXO Good Grips Cold Brew Coffee Maker is the best-looking unit we tried, and it has the most thoughtful features for brewing and storing your coffee. It’s easier to clean and store than the other cold-brew makers we tested, and it brings out more flavors from your coffee than other brewers we tried. It uses a metal mesh filter instead of paper, which seemed to accentuate the flavors. Whether our panel actually liked that bigger flavor is a different story. In our first tasting, two panelists deemed it their favorite, but our coffee professionals were unimpressed and rated it the highest in detected acidity. Evaluating coffee is, of course, a subjective thing, but the OXO model seemed to create a brew that was more “punchy” than the mellow, smooth Filtron brew.

The OXO brewer’s vessel is more helpful than most, too. It has extensive volume markings to help you measure water for brewing. It looks nicer than a plastic carafe or a blue-hued jug (as on the Filtron and Bod brewers), has a pouring spout, and fits better on a refrigerator shelf than the tall jug provided with many competing models.

We did experience one snag in brewing with the OXO unit using one of the optional paper filters, which can be placed above the reusable mesh filter to create a smoother brew. In our tests, a silty mudflat of ground coffee accumulated on the filter and blocked the flow of concentrate. An OXO representative and a coworker who uses the OXO brewer both said they’d never experienced a complete blockage when using paper filters.

Runner-up: Filtron Cold Water Coffee Concentrate Brewer

The Filtron Cold Water Coffee Concentrate Brewer is a little cumbersome looking, but it made great-tasting concentrate. Photo: Michael Hession

The Filtron Cold Water Coffee Concentrate Brewer consistently produced great-tasting coffee concentrate in all our tests, with most taste testers ranking its brew first or second. It’s not quite as easy to set up and drain as the OXO, but still simple, compared with nearly every other model we tested. The resulting concentrate costs less per cup than that of any other maker we tried (if you use the default recipe). And although the Filtron doesn’t look as stylish or pack away as neatly as our other pick, its black plastic is less likely than the white Toddy or the clear-plastic OXO to show coffee stains over time.

The Filtron produced a smooth, mellow cup of coffee every time, regardless of the beans we used. Five of the six tasters in our first-year panel gave the Filtron cup their highest rating for flavor, and three named it their favorite overall. It also received almost no dings for acidity, strength or weakness, or body. Only one barista (who generally disliked the lot) found it to have a “short, ashy aftertaste.” In our most recent tests it was the runner-up, making coffee that tasted flatter and more typically coffee-like than the brighter, more exciting brews of the OXO.

Compared with the other methods we tested, the Filtron system is far easier to set up and empty out. (We still found the OXO easier, though). A felt filter and a rubber stopper fit into the bottom of a black plastic bucket with a handle, and an optional (but recommended) paper filter holds the grounds and water. Cleaning it means either plucking out a filter full of grounds or scooping and rinsing the bucket. After rinsing the felt filter, you store it in water in an included container in the fridge to prevent mold. That potential for molding is one of the biggest drawbacks; it’s easy to forget to store the filter properly. The Filtron’s large paper filters make its brews smoother, but are hard to find—they’re not commonly stocked at stores and are held in limited supply on Amazon.

This guide may have been updated by The Sweethome. To see the current recommendation, please go here.

Note from The Sweethome: When readers choose to buy our independently chosen editorial picks, we may earn affiliate commissions that support our work.

from Engadget http://ift.tt/2iyLbXP

DIY Mocha Cooker

Filed under: Uncategorized — ketan @ 9:51 AM

Food-grade 3D printing filament is on the rise and it is nice to have a custom coffee mug in the office to instill a little envy in the locals. [Stefan] took it upon himself to create a Mocha Machine that he would 3D print and test the boundaries of his filament.

[Stefan] used Fusion 360 to replicate the famous Bialetti Moka Express pot in its true octagonal shape. Since the pot brews coffee under pressure, he tested tolerances in Fusion 360 to make sure all the thicknesses were right. While the design was being printed, a steel washer was added to facilitate induction heating, since you can’t really put a plastic pot over a flame. The print uses Formfutura Volcano PLA, which is annealed for an hour at 110 degrees Celsius.

Below is a video of the whole process and though the material may not be food grade, the project is definitely a step in the right direction. Since the printed parts can withstand temperatures of up to 160 degrees Celsius, projects that involve boiling water or experiments with crystallization can benefit from a custom design.

We really hope to see more projects that use this technique. For those eyeing their coffee machine right now, take a look at more coffee machine hacks, as well as alarm clock hacks to get the coffee brewing in the morning.

VIDEO


from Hack a Day http://ift.tt/2wSdLcb

August 25, 2017

Fully 3D-Printed Rubik’s Cube Solving Robot Powered by Raspberry Pi!

Filed under: Uncategorized — ketan @ 11:11 PM

VIDEO

This 3D-printed Raspberry Pi project was shared on otvinta!

This 3D-printed Raspberry Pi-powered Rubik’s Cube solving robot has everything any serious robot does — arms, servos, gears, vision, artificial intelligence and a task to complete. If you want to introduce robotics to your kids or your students, this is the perfect machine for it.

This one-eyed four-armed giant stands 35cm (14″) tall. 70 hours of print time and an entire spool of filament are needed to print it, not to mention over $200 worth of hardware, but once fully operational, it will surely wow your friends and neighbors. Scramble your Rubik’s cube, place it in the robot’s grippers, press a button, and then sit back and watch this amazingly smart and stunningly beautiful machine put it back together. Watch the video below!

This robot is fully 3D-printable. Other than the servos, servo horns, camera, electronics, and a few dozen bolts and nuts, it has no traditionally manufactured parts. Absolutely no soldering or breadboarding is required.

from Adafruit Blog http://ift.tt/2wvbdkC

August 22, 2017

ESP8266 Based Internet Radio Receiver is Packed with Features

Filed under: Uncategorized — ketan @ 11:34 PM

Have a beautiful antique radio that’s beyond repair? This ESP8266-based Internet radio by [Edzelf] would be an excellent starting point to get it running again, as an alternative to a Raspberry Pi based design. The basic premise is straightforward: an ESP8266 handles the connection to an Internet radio station of your choice, and a VS1053 codec module decodes the stream to produce an audio signal (which will require some form of amplification afterwards).

Besides the excellent documentation (PDF warning), where this firmware really shines is the sheer number of features that have been added. It includes a web interface that allows you to select an arbitrary station as well as cycle through presets, adjust volume, bass, and treble.

 

If you prefer physical controls, it supports buttons and dials. If you’re in the mood for something more Internet of Things, it can be controlled by the MQTT protocol as well. It even supports a color TFT screen by default, although this reduces the number of pins that can be used for button input.

The firmware also supports playing arbitrary .mp3 files hosted on a server. Given the low parts count and the wealth of options for controlling the device, we could see this device making its way into doorbells, practical jokes, and small museum exhibits.

To see it in action, check out the video below:

[Thanks JeeCee]

VIDEO


from Hack a Day http://ift.tt/2v2HSxY

Electric Longboard with All-New Everything

Filed under: Uncategorized — ketan @ 11:02 PM

We love [lolomolo]’s Open Source electric longboard project. Why? Because he completely re-engineered everything while working on the project all through college. He tackled each challenge, be it electronic or mechanical as it came, and ended up making everything himself.

The 48″ x 13″ deck is a rather unique construction utilizing carbon fiber and Baltic birch. In testing the deck, [lolomolo] found the deflection was less than an inch with 500 lbs. on the other end. He modified the Caliber II trucks to add four 2250 W Turnigy Aerodrive brushless outrunners driving the wheels with the help of belts. The motors are controlled by VESC, an Open Source speed controller. There are a lot of fun details, like the A123 lithium cells equipped with custom battery management system PCBs.

The board sports 5W RGBW headlights that are so bright he can only run them at 10% PWM, plus RGB LED underlighting. All of it is controlled by an onboard Linux box. You can check out [lolomolo]’s GitHub repository for code, schematics, and CAD files. His Instructable for this project also has more design notes and thoughts.

If sweet longboards are your bag, check out the 3D-printed longboard and the long-distance electric longboard we published previously.


from Hack a Day http://ift.tt/2v1txld

August 20, 2017

Find MoviePass Cinemas In Your Area With Your Zipcode

Filed under: Uncategorized — ketan @ 6:32 PM

MoviePass is a neat service headed by Netflix co-founder Mitch Lowe. For a $10/month subscription, you can watch one movie every day in a participating cinema. It’s basically a subscription that gives you access to up to 30 movie tickets every month for $10. Watching movies can’t get cheaper than that. There’s one very small catch with the service: it works at participating cinemas only. The good news is that most cinemas, close to 90%, participate, but users will want to be sure there is at least one cinema nearby where the service is accepted before they subscribe. MoviePass Location is an online app that can find MoviePass cinemas in your area using your zip code.

The official MoviePass app lets you find MoviePass cinemas in your area with your zip code too, but there’s a catch: you have to sign up for the service first. The service offers a two-week trial period, but if you want to find participating cinemas without signing up, MoviePass Location is the way to go.

Find MoviePass Cinemas

MoviePass Location works with your zip code. The UI is just a map. Enter your zip code and tap the Enter key.

The app will list all cinemas where MoviePass works complete with their address and location markers on the map.

A Word Of Caution About AMC Cinemas

MoviePass is a relatively new service, and despite the pedigree of its leadership it still has its problems. The one noteworthy problem is that AMC cinemas aren’t happy about it. They’ve already threatened legal action against MoviePass. As of this writing, AMC cinemas still appear as participating cinemas and there’s no official statement that they’ve withdrawn from the service. That’s the extent of it, but if you have the option to go to a different cinema, you might want to consider it.

There’s a chance that MoviePass Location might eventually be discontinued. A similar service launched two days ago, and MoviePass revoked its API access, effectively killing it. MoviePass purposefully restricts this information so that users are forced to sign up before they can look at the list of participating cinemas. As a business model, this makes some sense; it’s the foot-in-the-door sales pitch. If a user is signed up, the service can push harder to get them to subscribe, and there’s a better chance they will subscribe long term. Of course, it also makes users skeptical, especially since the $10/month price tag seems unsustainable long-term, if not too good to be true.

MoviePass currently only operates in the US.

Read Find MoviePass Cinemas In Your Area With Your Zipcode by Fatima Wahab on AddictiveTips – Tech tips to make you smarter

from AddictiveTips http://ift.tt/2v4IVcq
