
    The James Norton Column: MoDaCo reviews and battery tests


    We here at MoDaCo do plenty of device reviews. We don't tend to bring too much science into them, preferring to give our impressions, which are always informed by the other devices each individual reviewer has used.

    Whilst other publications try to offer a slightly colder and more clinical analysis of a device, we want to bring our own personalities and feelings about products into the mix.

    The way we do things here at MoDaCo leads to a more impassioned end result but can have a lower level of accuracy in a technical sense. For instance, we give an impression of how we like the screen on a new device without bringing out a colorimeter to test its accuracy. You probably have a preference for the type of detail and accuracy you want to read about in product reviews, but hopefully see the benefit of all the different approaches taken across the various sources online.

    Nonetheless, we have decided to bring a small amount of science to our otherwise largely subjective reviews. This is not going to be easy: none of us here at MoDaCo are scientists, nor do we have the vast resources required to take proper measurements of every aspect of the devices we review.

    The first thing we are looking to measure is battery life. Our intention is to run a battery test on each device that gives an indication of how that device compares to all the others we have tested. We are not looking to make this test perfect - that is an impossible task - but instead to give a reasonably accurate prediction of the differences between devices.

    The current plan is for our battery test to have two components, which we will execute on all devices. Firstly, we will test the device for video playback efficiency: starting from a full charge, we will loop a standard video until the battery reaches 10% and record how long it lasted.

    Secondly, we will run a web browser test, working through a series of standard scripts developed by our in-house JavaScript expert (that's me, by the way), again until the battery reaches 10%.

    Each of these two tests will be run at two different screen brightness levels: firstly with the screen calibrated to as close to 200 lux brightness as possible, and secondly at maximum brightness. Once both runs are complete, we will take the average of the two to produce an endurance number.
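    The column doesn't say how the timing itself will be implemented, so purely as an illustration - assuming an Android device reachable over adb, with the video loop or browser script already running - a rundown timer might look something like this (all function names here are ours, not MoDaCo's):

    ```python
    import re
    import subprocess
    import time

    def parse_battery_level(dumpsys_output: str) -> int:
        """Extract the battery percentage from `adb shell dumpsys battery` output."""
        match = re.search(r"level:\s*(\d+)", dumpsys_output)
        if match is None:
            raise ValueError("no battery level found in dumpsys output")
        return int(match.group(1))

    def endurance_minutes(run_200lux_min: float, run_max_brightness_min: float) -> float:
        """Average the two brightness runs into the single endurance number."""
        return (run_200lux_min + run_max_brightness_min) / 2.0

    def time_rundown(target_percent: int = 10, poll_seconds: int = 60) -> float:
        """Poll the battery until it drops to target_percent; return elapsed minutes."""
        start = time.monotonic()
        while True:
            out = subprocess.run(
                ["adb", "shell", "dumpsys", "battery"],
                capture_output=True, text=True, check=True,
            ).stdout
            if parse_battery_level(out) <= target_percent:
                return (time.monotonic() - start) / 60.0
            time.sleep(poll_seconds)
    ```

    So if the 200 lux run lasts 600 minutes and the maximum-brightness run lasts 480, the endurance number would come out at 540 minutes.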

    It is important to note that the results we get will not stand up on their own. If we suggest that an HTC One M8 can play video for 10 hours, that does not mean you will get the same result. However, if we give that number for the M8 and then say that a Sony Xperia Z2 can play video for 12 hours, you too should expect the Z2 to manage approximately 20% longer video playback than the M8.

    Within the next few months we will have our new battery testing tools prepared and in use. In the meantime, we want to know your thoughts on this. Please let us know what you think in the comments below.

    Are we choosing a reasonable methodology and if not, what would you prefer to see?

    Are tests like this useful to you?




    User Feedback


    Posted

    I'm all for some of the "scientific" tests, such as those you have mentioned.  I also like to see what the reviewer personally thinks of the device or a specific feature, as long as they can explain their reasons for their personal take on it.

     

    So a combination of objective and subjective observations can give a good idea of what a device is like to live with.

     

    I've seen many reviews of devices on other sites where one of the "cons" is a specific design feature of the device being reviewed, and that feature is there for those that want it. e.g. 12" tablets are often seen as too big. Fine, if you don't like the size then say so, but to list the size of a 12" tablet as a con when there are 10" and 8" versions in the same family is not reasonable in my book (OK - rant over, sorry!)


    Posted

    I think this is a sensible approach, but do not forget the radio state in your tests. Depending on location, mast search intervals and proximity to the carrier's masts all contribute significantly to battery drain. So do we test in airplane mode for a fairer comparison? Or do we ensure that devices are connected to the same carrier for every test?


    Posted

    Record a set of specific actions based on real life (I'm sure Paul can generate some tracking software).

    • screen on
    • open twitter
    • open web page
    • map search
    • wifi on

    Then play those actions back over a period of time using something akin to monkeyrunner.

     

    Then you get "real life" comparison tests.
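    The record-and-replay idea suggested above can be sketched in a few lines. This is purely illustrative - the `execute` callback is a stub standing in for the real adb or monkeyrunner calls an actual harness would make, and the recorded actions below are just the commenter's list with made-up delays:

    ```python
    import time
    from typing import Callable

    # A recorded session is an ordered list of (action, delay-in-seconds) pairs.
    # The delays preserve real-life pacing between user actions.
    RECORDED_SESSION = [
        ("screen_on", 0),
        ("open_twitter", 5),
        ("open_web_page", 30),
        ("map_search", 20),
        ("wifi_on", 10),
    ]

    def replay(session, execute: Callable[[str], None], sleep=time.sleep) -> list:
        """Play back recorded actions with their original pacing; return the log."""
        log = []
        for action, delay in session:
            sleep(delay)      # wait the same gap the real user left
            execute(action)   # in a real harness: an adb/monkeyrunner call
            log.append(action)
        return log
    ```

    Running the same session on every device under test is what makes the resulting drain figures comparable.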


    Posted

    I think this is a sensible approach, but do not forget the radio state in your tests. Depending on location, mast search intervals and proximity to the carrier's masts all contribute significantly to battery drain. So do we test in airplane mode for a fairer comparison? Or do we ensure that devices are connected to the same carrier for every test?

    What do you think the most sensible approach is? Airplane mode? My view is that having no SIM installed, but WiFi enabled with background syncing running, would be OK. But it is hard to produce a baseline here, as different people review the devices in different locations with different setups.


    Posted

    Precisely - so what's the point of a single metric? It also needs comparison to have any meaning. My own feeling is that some sort of balanced scorecard would be better.


    Posted

    Precisely - so what's the point of a single metric? It also needs comparison to have any meaning. My own feeling is that some sort of balanced scorecard would be better.

    Can you give an example of what you mean?


    Posted

    I actually like the TechCrunch approach, where they score each aspect of a phone, albeit a bit unscientifically. Taking this further, I would ask myself/the MoDaCo community what the killer features of a smartphone are; for myself I would immediately chirp up: camera, GPS, music centre. Having got the list, I would then say what the minimum acceptable level is (= pass) and what gives added value in each area (= higher score). These need a tabulated format and an overall score. I would then list all important features not considered killer, or susceptible to scoring. Voila. Of course, always remembering there is no definitive way of comparing or determining which is the best smartphone, but it provides a readily accessible framework on which individuals can bring their own preferences and judgements more easily to bear.



MoDaCo is part of the MoDaCo.network, © Paul O'Brien 2002-2015. MoDaCo uses IntelliTxt technology.