Adreno 200 optimisation tips and tricks


Guest l2azor


Guest t0mm13b

Interesting, but nothing in there that would help create a smooth ROM; in fact, a lot of it is more in the direction of gaming and graphics... interesting linky though - ty :)


Guest deksman2

A 'smooth' ROM can be gained via plenty of optimizations that are already done... but if you wanted to take this further, the UI would need to be completely rendered via the GPU... as would the browser, pinching, zooming, etc...

However... Gingerbread seemingly supports partial GPU acceleration of its UI... but not the whole of it.

Ice Cream Sandwich (Android 4.0), on the other hand, should have the entire UI hardware accelerated as far as I know... though Google mentioned some kind of garbage about low-end phones not having sufficient hardware capability for this, which is why ICS won't get an official release for many devices... which in my opinion is a load of trash.

The Blade's hardware is enough to run the UI with full hardware acceleration... the GPU is certainly capable of producing well over 30 FPS for UI drawing, so there would be no lag.
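For what it's worth, full UI hardware acceleration did ship as an opt-in flag: from Android 3.0 (API level 11) an application can request GPU-backed 2D rendering in its manifest, and ICS turns it on by default when the app targets API 14+. A sketch of the opt-in follows (a manifest fragment only; the activity name is hypothetical):

```xml
<!-- AndroidManifest.xml fragment: opt the whole app into GPU-backed
     2D rendering (available from Android 3.0 / API level 11). -->
<application android:hardwareAccelerated="true">
    <!-- A single activity can opt back out if its custom Views rely on
         Canvas operations the hardware path does not support. -->
    <activity android:name=".SoftwareDrawnActivity"
              android:hardwareAccelerated="false" />
</application>
```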

Edited by deksman2

Guest t0mm13b

(quote of deksman2's post above snipped)

Uhmmm... I believe I mentioned this in another thread about optimizations, and gave my opinion on it here as well.

Using Adreno is not the magic bullet for UI drawing... read the postings and you'll understand why :)

Edited by t0mm13b

Guest deksman2

(quote of t0mm13b's post above snipped)

While I don't think that using the GPU is the magic bullet for UI drawing, it IS preferable.

Why?

Because you stop taxing the CPU with work it doesn't have to do, and the GPU is more efficient at those tasks because that's what it was designed for in the first place.

It can also do this at a lower power draw than the CPU (which is left free to do other, more 'important' things).

Also, the CPU (despite various optimizations) can easily choke while viewing graphics-heavy documents or websites (discernible lag)... whereas the GPU (if properly implemented) would not.

The iPhone is a clear example of this (the 3G's hardware is essentially identical to the Blade's).
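The "discernible lag" point comes down to frame budget: at 30 FPS the renderer has roughly 33 ms to produce each frame, and lag appears whenever drawing takes longer than that. A minimal sketch of the arithmetic (the pass timings in the comments are illustrative, not measured on a Blade):

```java
public class FrameBudget {
    // Milliseconds available to draw one frame at a given refresh rate.
    static double budgetMs(double fps) {
        return 1000.0 / fps;
    }

    public static void main(String[] args) {
        System.out.printf("30 FPS budget: %.1f ms%n", budgetMs(30.0)); // ~33.3 ms
        System.out.printf("60 FPS budget: %.1f ms%n", budgetMs(60.0)); // ~16.7 ms
        // A hypothetical GPU composite pass taking 10 ms fits either budget;
        // a software pass taking 45 ms misses both, which the user sees as lag.
    }
}
```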

Edited by deksman2

Guest t0mm13b

(quote of deksman2's post above snipped)

Sure it's preferable, but, and this is the big BUT, it would reduce portability to other devices if you write code that uses the GPU to do all the 2D drawing! That would effectively mean having a lot of native code for ONE GPU chipset - and what about another chipset used in another handset? It goes on and on... and there you have a nice terminology - defragmentation....

Why do you think the majority of the Android framework is written in Java? :rolleyes:

For portability, to ease the transition of Android to a new device... that's why...

Sure, there are JNI callbacks into the native C runtime to deal with that, but to be honest, JNI can be a chokepoint, shuffling between pure Java and native code, and a lot of the Java code is not really that optimized either...

I digress here; I'm not getting into a big meaningless discussion about the Dalvik VM, the runtime, running apps, the kernel, etc...

Edited by t0mm13b

Guest Hoonboof

(quote of t0mm13b's post above snipped)

It's fragmentation, not defragmentation. Rewriting PixelFlinger/SurfaceFlinger in EGL is doable; the reason it was never done is that Google felt rewriting code to stop garbage collection happening so often would bring bigger gains (and on a lot of hardware this is very true - the HTC Hero is a good example, where software rasterisation of a lot of ops is faster).

Why are you trying to talk down native code? If -anything- is going to get around garbage collection, it's JNI.

Good algorithms are better than micro-optimisation - that's what the JIT is for!
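To make the "good algorithms beat optimisation" point concrete, here is a small sketch (class and method names are my own, not from the thread): two ways to check an array for duplicates. The JIT can shave constants off both loops, but it cannot change their asymptotic cost - only a better algorithm does that.

```java
import java.util.HashSet;
import java.util.Set;

public class DupCheck {
    // O(n^2): compare every pair - the JIT can only make each
    // comparison cheaper, not reduce how many comparisons happen.
    static boolean hasDupQuadratic(int[] a) {
        for (int i = 0; i < a.length; i++)
            for (int j = i + 1; j < a.length; j++)
                if (a[i] == a[j]) return true;
        return false;
    }

    // O(n): a better algorithm - one pass with a hash set,
    // relying on Set.add returning false for an element already present.
    static boolean hasDupLinear(int[] a) {
        Set<Integer> seen = new HashSet<>();
        for (int x : a)
            if (!seen.add(x)) return true;
        return false;
    }

    public static void main(String[] args) {
        int[] data = {3, 1, 4, 1, 5};
        System.out.println(hasDupQuadratic(data)); // true
        System.out.println(hasDupLinear(data));    // true
    }
}
```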


Guest deksman2

Ok... when it comes to a fully HW-accelerated UI... would it then be possible to leave the implementation to the manufacturer of each phone?

I mean, Android devs could simply allow for the option, but it would be up to the manufacturers to implement it for each device.

