Froyo on Liquid


Guest Charlton22

Guest phhusson
Multi Touch is working!!!

What nice work you're doing, guys!!! Keep it up! Well done!

Try to do the same thing as

on an eclair build :(

Guest phhusson
can you say when you will release a working version of the Froyo ROM?

looks like it's fully working

Not fully; if the video is only 17s long, it's not without reason :(

Basically, whenever the two points get the same x or y, you have a 50/50 chance of getting the correct point :( (In my tests it's actually better than 50/50, but I can't see why.)


Guest Umka
Seriously. They should start a smartphone manufacturing company.

Hey, let's all chip in and become shareholders, with our devs as top developers! Modaco Inc. Sounds nice!


Guest Rajit
Hey, let's all chip in and become shareholders, with our devs as top developers! Modaco Inc. Sounds nice!

I second your motion. And IMHO, "Liquid Ink" also sounds nice.


Guest akira.pwr
I'm pretty sure! Our devs made huge improvements....

GO ---- Malez ---- Vache ---- PHH ---- We Support You! :-)

Quoteeee!!!


Guest Rajit
As posted on my Twitter, 3D is now working, plus Launcher2 and live wallpapers.

About responsiveness, I would say it is really smooth and responsive without any kind of optimisation. Froyo is full of features with the feel of a light ROM. Faster than Eclair, for sure; Donut, maybe.

If you want to the flash working on liquid

Malez, could you guys consider porting the Stream launcher to your 2.2 ROM??? It looks really nice, and it would be awesome if we could run that.


Guest D.Rosendahl
Not fully; if the video is only 17s long, it's not without reason :(

Basically, whenever the two points get the same x or y, you have a 50/50 chance of getting the correct point :( (In my tests it's actually better than 50/50, but I can't see why.)

Actually, since phhusson has done the hard work of finding out where it is suitable to implement a tracking filter, most cases can be handled without problems at all.

The only situation where I can see a problem is when both fingers are pressed down SIMULTANEOUSLY with the same x or y (which shouldn't happen often); otherwise it's quite simple to track which x and y belong to which finger by storing older values. Ever since I first saw the multitouch test on the Nexus One, I have wondered why this hasn't been implemented by Google yet; maybe there are some response-time reasons?

(The whole thing could probably be improved even more with some kind of low-pass filtering to remove some flickering; it depends on how reliable and fast (in frequency) the signals from the hardware are.)

Sorry to say, I don't have the software or the knowledge (about Android and compiling kernels) at the moment to do anything about it, but any developer is welcome to contact me for details.


Guest Quipeace
Actually, since phhusson has done the hard work of finding out where it is suitable to implement a tracking filter, most cases can be handled without problems at all.

The only situation where I can see a problem is when both fingers are pressed down SIMULTANEOUSLY with the same x or y (which shouldn't happen often); otherwise it's quite simple to track which x and y belong to which finger by storing older values. Ever since I first saw the multitouch test on the Nexus One, I have wondered why this hasn't been implemented by Google yet; maybe there are some response-time reasons?

(The whole thing could probably be improved even more with some kind of low-pass filtering to remove some flickering; it depends on how reliable and fast (in frequency) the signals from the hardware are.)

Sorry to say, I don't have the software or the knowledge (about Android and compiling kernels) at the moment to do anything about it, but any developer is welcome to contact me for details.

I sent phhusson a PM yesterday on the subject, but haven't received a response yet; it was pretty much the same as what you posted here.

I've never been this close to the hardware, so I'm not sure it's even possible to do something like this at such a low level.


Guest phhusson
Actually, since phhusson has done the hard work of finding out where it is suitable to implement a tracking filter, most cases can be handled without problems at all.

The only situation where I can see a problem is when both fingers are pressed down SIMULTANEOUSLY with the same x or y (which shouldn't happen often); otherwise it's quite simple to track which x and y belong to which finger by storing older values. Ever since I first saw the multitouch test on the Nexus One, I have wondered why this hasn't been implemented by Google yet; maybe there are some response-time reasons?

(The whole thing could probably be improved even more with some kind of low-pass filtering to remove some flickering; it depends on how reliable and fast (in frequency) the signals from the hardware are.)

Sorry to say, I don't have the software or the knowledge (about Android and compiling kernels) at the moment to do anything about it, but any developer is welcome to contact me for details.

I thought the same way, until I realized a little something. But first, let's explain the problem.

On (all?) capacitive screens, the actual measurement hardware knows only the x and the y values that are touched.

If you have two points at X=(x1,y1) and Y=(x2,y2), the system will tell you x={x1,x2}, y={y1,y2}.

Then, you have to know which point is where. For one point, you have four possibilities: A=(x1,y1), B=(x2,y1), C=(x1,y2), D=(x2,y2).

So, yes, for both fingers I search for the point that is nearest to the one before.

But when at some point x1=x2, at the next step you won't be able to know who is who. Say X moves 10 pixels with dx>0 and Y moves 10 pixels with dx<0: the distance from X to A is equal to the distance from X to B, so you can't do anything.

One way to work around that is to measure the speed of the moving point, and assume that the speed is somehow continuous.

But as you might have noticed, when two points have axes less than ~50 pixels apart, the controller says they have the same axis.

So you have a huge dead area where you can't measure the speed of individual points.


Guest apecross
I thought the same way, until I realized a little something. But first, let's explain the problem.

On (all?) capacitive screens, the actual measurement hardware knows only the x and the y values that are touched.

If you have two points at X=(x1,y1) and Y=(x2,y2), the system will tell you x={x1,x2}, y={y1,y2}.

Then, you have to know which point is where. For one point, you have four possibilities: A=(x1,y1), B=(x2,y1), C=(x1,y2), D=(x2,y2).

So, yes, for both fingers I search for the point that is nearest to the one before.

But when at some point x1=x2, at the next step you won't be able to know who is who. Say X moves 10 pixels with dx>0 and Y moves 10 pixels with dx<0: the distance from X to A is equal to the distance from X to B, so you can't do anything.

One way to work around that is to measure the speed of the moving point, and assume that the speed is somehow continuous.

But as you might have noticed, when two points have axes less than ~50 pixels apart, the controller says they have the same axis.

So you have a huge dead area where you can't measure the speed of individual points.

Have you ever noticed that the coordinates are wrong only when the second finger is to the upper right of the first finger? That might help...


Guest phhusson
Have you ever noticed that the coordinates are wrong only when the second finger is to the upper right of the first finger? That might help...

Check my touchscreen workaround video.


Guest bais

What you've already done with MT is great, and I think that's enough. The inverted thing was the biggest problem and you solved it, so I don't think spending any more time on MT is necessary; the issue you described seems minor.

Edited by bais

Guest apecross

I have seen it, but I was trying to say that if the inversion always happens under the same conditions, you can check whether the fingers are in that position and invert the axes, so you get the right positions...

Edited by apecross

Guest Quipeace
I thought the same way, until I realized a little something. But first, let's explain the problem.

On (all?) capacitive screens, the actual measurement hardware knows only the x and the y values that are touched.

If you have two points at X=(x1,y1) and Y=(x2,y2), the system will tell you x={x1,x2}, y={y1,y2}.

Then, you have to know which point is where. For one point, you have four possibilities: A=(x1,y1), B=(x2,y1), C=(x1,y2), D=(x2,y2).

So, yes, for both fingers I search for the point that is nearest to the one before.

But when at some point x1=x2, at the next step you won't be able to know who is who. Say X moves 10 pixels with dx>0 and Y moves 10 pixels with dx<0: the distance from X to A is equal to the distance from X to B, so you can't do anything.

One way to work around that is to measure the speed of the moving point, and assume that the speed is somehow continuous.

But as you might have noticed, when two points have axes less than ~50 pixels apart, the controller says they have the same axis.

So you have a huge dead area where you can't measure the speed of individual points.

I'm assuming that until the axes get too close it all goes right; if that's where I go wrong, please tell me to gtfo and stfu XD

Here's a small piece of pseudocode, far from complete.

-cut- // see below

So what's happening: we first have a normal sequence; the first value from rawCoords belongs to the first finger's x, and so on.

We can keep using that sequence, as the points aren't messed up yet.

Now either the x or the y coordinates are the same, and the hardware doesn't know which x belongs to which y coordinate, so we wait for the coordinates to change.

Now let's suppose the x and y coordinates have changed so that they are no longer equal.

Luckily we still have the old coordinates from before things went wrong.

We figure out which touch point differs the least from each old coordinate, and store it in the sequence (in the right order) so we know which rawCoordinate should be copied to which final coordinate.

That sequence will then stay the same until the coordinates become equal again.

Do you think that will work? I hate not being able to test it myself; I really have no idea where to start XD

PS: I know the formatting is off; I hope it's still somewhat readable.

Edited by Quipeace

Guest phhusson
I'm assuming that until the axes get too close it all goes right; if that's where I go wrong, please tell me to gtfo and stfu XD

Here's a small piece of pseudocode, far from complete.

1. Your code is totally wrong, and does stupid stuff.

2. It seems you haven't even tried to read my code, since that's the idea I use (but mine works...).

int sequence[4] = {0, 1, 2, 3}; // sequence is the order in which the raw touch points should be copied

// this sequence will be modified when things get messed up.

First, about this "sequence" stuff.

Your code makes it possible for an X value to become a Y value and vice versa: sequence[0]=1 or 3 is possible, but meaningless!

Also, sequence[0]=sequence[2] is possible, and also meaningless.

For both of these reasons, I chose to describe the possible points with a 2-bit int: the first bit selects the correct x among {x1,x2} for the first point, and the second bit selects the correct y among {y1,y2} (it might be the opposite for x/y in the actual code). The second point's values are the NOT of the first bit and the NOT of the second bit. That makes the code safe against any weird states.

// repeat for each old coordinate ([0][0] = 0, [0][1] = 1, [1][0] = 2 ...

// should be in a loop, i know

Ok, I won't say it then.

curDifference = oldCoords[0][1] - rawCoords;

rawCoords=(800, 600) is the best match for any point according to your algorithm.

So what's happening: we first have a normal sequence; the first value from rawCoords belongs to the first finger's x, and so on.

We can keep using that sequence, as the points aren't messed up yet.

Now either the x or the y coordinates are the same, and the hardware doesn't know which x belongs to which y coordinate, so we wait for the coordinates to change.

Now let's suppose the x and y coordinates have changed so that they are no longer equal.

Luckily we still have the old coordinates from before things went wrong.

We figure out which touch point differs the least from each old coordinate, and store it in the sequence (in the right order) so we know which rawCoordinate should be copied to which final coordinate.

Try your algorithm in your head when doing a one-way cross: it will do the exact opposite of what's needed.

If you code it the other way, then make your fingers cross one way and then go back the opposite way, it will again do the exact opposite of what's needed.

PS: I'm tired, so some sentences might be meaningless; sorry about that.


Guest Quipeace

The code isn't meant to do anything, just a very rough (and very quick) sketch of what I was thinking. I'm definitely not used to programming without running the code after a change or two :(

2. It seems you haven't even tried to read my code, since that's the idea I use (but mine works...).

I did, but I didn't fully understand it (I'm not familiar with many of the operators, as I never had to use them before). By the way, why do defines have the same color code as comments on Gitorious? o.0

Anyway, it's not gonna work, or at least not within a year or two. Sorry XD.

EDIT:

Epic fail; I just ran a very quick demo on the NDS with dummy values, leaving out the Y coordinates, and it did indeed randomly change direction -.- Back to the lazy libraries again :(

Suppose I'm just going to quote bais:

What you've already done with MT is great, and I think that's enough. The inverted thing was the biggest problem and you solved it, so I don't think spending any more time on MT is necessary; the issue you described seems minor.
Edited by Quipeace
