
GPU Overclock


Guest Genrix


I was reading the XDA forums and found something interesting...

It works on the stock ROM.

In a console on the device, run:

----

su

mount -t debugfs debugfs /sys/kernel/debug

----
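The mount plus the reads described below can be collected into one script. A minimal sketch; the `DEBUGFS` variable override is only an illustration convenience, not part of the stock procedure:

```shell
#!/bin/sh
# Sketch: mount debugfs and read the GPU clock files (needs root).
# Paths assume the stock MSM7x30 clk debugfs layout.
DEBUGFS=${DEBUGFS:-/sys/kernel/debug}
mount -t debugfs debugfs "$DEBUGFS" 2>/dev/null || true    # may already be mounted
cat "$DEBUGFS/clk/grp_3d_src_clk/list_rates" 2>/dev/null || true  # supported GPU rates
cat "$DEBUGFS/clk/grp_3d_src_clk/rate" 2>/dev/null || true        # current GPU rate
```

Run it once at idle and once during a 3D benchmark to see the rate switch.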
This mounts the debugfs subsystem so it can be read. I started to explore what interesting things it can give us, and here is what came of it. Go into the mounted directory (or via the symlink ./d), then into clk/grp_3d_src_clk. There you will see several files. Open "list_rates" as a text file: it contains a list of numbers, which are the supported GPU frequencies. Remember the last two entries. Now open the file "rate". The value 192000000 is the current GPU frequency; we are at low load right now, so the GPU is not busy. Next, open the board driver in the kernel source, /kernel/arch/arm/mach-msm/board-msm7x30.c, do a search with the keyword "kgsl", and you find this:
-----------------------------

	.max_grp2d_freq = 0,

	.min_grp2d_freq = 0,

	.set_grp2d_async = NULL, /* HW workaround, run Z180 SYNC @ 192 MHZ */

	.max_grp3d_freq = 245760000,

	.min_grp3d_freq = 192 * 1000*1000,

	.set_grp3d_async = set_grp3d_async,

	.imem_clk_name = "imem_clk",

	.grp3d_clk_name = "grp_clk",

	.grp3d_pclk_name = "grp_pclk",

-----------------------------
max_grp3d_freq and min_grp3d_freq are the GPU frequencies: min_grp3d_freq for low (idle) load and max_grp3d_freq for high load. Next, open the clock driver in the source, /kernel/arch/arm/mach-msm/clock-7x30.c, do a search with the keyword "grp", and you find:
----------------------------------------------

static struct clk_freq_tbl clk_tbl_grp[] = {

	F_BASIC( 24576000, LPXO,  1, NOMINAL),

	F_BASIC( 46080000, PLL3, 16, NOMINAL),

	F_BASIC( 49152000, PLL3, 15, NOMINAL),

	F_BASIC( 52662875, PLL3, 14, NOMINAL),

	F_BASIC( 56713846, PLL3, 13, NOMINAL),

	F_BASIC( 61440000, PLL3, 12, NOMINAL),

	F_BASIC( 67025454, PLL3, 11, NOMINAL),

	F_BASIC( 73728000, PLL3, 10, NOMINAL),

	F_BASIC( 81920000, PLL3,  9, NOMINAL),

	F_BASIC( 92160000, PLL3,  8, NOMINAL),

	F_BASIC(105325714, PLL3,  7, NOMINAL),

	F_BASIC(122880000, PLL3,  6, NOMINAL),

	F_BASIC(147456000, PLL3,  5, NOMINAL),

	F_BASIC(184320000, PLL3,  4, NOMINAL),

	F_BASIC(192000000, PLL1,  4, NOMINAL),

	F_BASIC(245760000, PLL3,  3, HIGH),

	/* Sync to AXI. Hence this "rate" is not fixed. */

	F_RAW(1, SRC_AXI, 0, BIT(14), 0, 0, NOMINAL, NULL),

	F_END,

----------------------------------------------
245760000 is the output frequency; PLL3 is a clock generator running at some fixed frequency; 3 is the frequency divider; HIGH is a voltage-level preset, because if you search this driver for "HIGH" you find:
-------------------------------

	static const int mv[NUM_SYS_VDD_LEVELS] = {

		[NONE...LOW] = 1000,

		[NOMINAL] = 1100,

		[HIGH]	= 1200,

-------------------------------
Having learned all this, I thought it should be possible to overclock the GPU. First I checked whether the GPU frequency actually switches. Remember that we mounted debugfs? I ran a 3D test and watched over ADB what value appeared in clk/grp_3d_src_clk/rate: it became 245760000. The GPU scales its frequency up when it gets a heavy 3D graphics load.

So I think we can add a new line (or replace the last line) in the driver clock-7x30.c with something over 300 MHz, and set the corresponding frequency limit in the driver board-msm7x30.c. For example, for clock-7x30.c: the 73728000 entry uses divider 10, so the full PLL frequency is 73728000 * 10 = 737280000. The next available frequency step uses divider 2: 737280000 / 2 = 368640000. So we would add a line for the next frequency step: F_BASIC(368640000, PLL3, 2, HIGH).

Maybe we can also make the GPU run undervolted at 192 MHz by writing NONE...LOW in the 192 MHz line, but I am not sure the GPU is able to work at such a low voltage level.

Then we must raise the upper GPU scaling limit in board-msm7x30.c:
	.max_grp3d_freq = 368640000, /* old freq 245760000, */

	.min_grp3d_freq = 192 * 1000*1000,

Maybe we should use another PLL, if possible, and pick a smaller frequency such as 300 MHz. But with PLL3 the only usable divider is 2, which gives us only ~369 MHz.
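The arithmetic above can be checked in the shell (plain back-calculation from the table entries, assuming output_freq = pll_rate / divider):

```shell
# pll_rate = output_freq * divider, back-calculated from clk_tbl_grp rows
pll3=$(( 245760000 * 3 ))          # from F_BASIC(245760000, PLL3, 3, HIGH)
echo "PLL3 rate: $pll3"            # 737280000
echo $(( 73728000 * 10 ))          # same PLL via the divider-10 row: 737280000
echo "next step: $(( pll3 / 2 ))"  # divider 2 -> 368640000
```

Both rows of the table imply the same ~737.28 MHz source, which is what makes the divider-2 step plausible.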

I don't understand the kernel source code that well, and I made stupid mistakes when trying to overclock, so please, TechnoLover, look over my approach. Maybe this will work. I have only one device, and it would be a pity to kill it, although the risk is very small.

Maybe someone will agree to take the risk and volunteer to test the kernels that TechnoLover will build.

I would be very happy if we manage to overclock the GPU. :)

P.S. Besides overclocking the GPU, in the driver board-msm7x30.c we can likewise allocate more memory for the GPU. By default, 30 MB is defined; for the Iconia Smart it is set to 40 MB. I don't think we would be much worse off sacrificing another 20 MB of system RAM for the GPU.

But we need to check whether the GPU will actually use the whole allocation, or whether the size is specified somewhere else as well.

And we could enlarge the framebuffer; that would cost another 5-10 MB.

Reducing overall memory by 30 MB should not, I think, hurt our device noticeably, but it might speed up our games.

Edited by Genrix

Guest TechnoLover

davidevinavil told me that you had that (great) idea, but I don't know if we can risk such an option across all the different Liquid MT devices...

~369 MHz is an overclock of ~50%, so it's a big deal, and since we have no CPU/GPU temperature readings it's a bit risky, but maybe worth it...

Maybe we should give it a try, since the Desire Z, Desire HD, SGS2, etc. also have GPU OC ;)


The temperature of chips and processors is generally governed by the voltage level.

Usually chips and processors are not burned by frequency; they are burned by a high voltage level, i.e. overheating.

If we only raise the frequency, and raise it beyond what the chip can run stably, it just throws errors and hangs. Then you reboot and flash another kernel.

A 100% CPU overclock with the voltage raised 10% is normal for us, but a 50% GPU overclock without any voltage increase is dangerous? :)

So I think the risk is minimal, although yes, some risk exists.

Edited by Genrix

Guest TechnoLover
Quote: "if freq = 73728000 with divider 10, then PLL / 1 = 737280000, the full frequency."

How did you get there? Explain it a bit, please (:

btw. LOW works at 192000000 Hz for me ;)


Guest TechnoLover

So I just added the new line to the files, booted up, and ran some benchmarks. In the shell I could see that it goes up to 368640000, but nothing changed: FPS, scores, etc. are all the same.

I'll write an e-mail to Acer Germany and ask why they capped the framerate at 30 fps...

Edited by TechnoLover

Look at /kernel/arch/arm/mach-msm/clock-7x30.c, at the clk table lines:

F_BASIC( 73728000, PLL3, 10, NOMINAL),

and

F_BASIC(245760000, PLL3, 3, HIGH),

245760000 is the output frequency; PLL3 is the number of the clock generator, which runs at some fixed frequency; 3 is the PLL frequency divider; HIGH is the voltage level.

If we get ~245 MHz with divider 3, then the full PLL frequency = ~245 * 3 ≈ 735 MHz.

Or, using the other clk frequency from the table: 73728000 * 10 = 737280000.

To overclock, we use the next frequency step, applying divider 2: 737280000 / 2 = 368640000 Hz.

And we get this entry: F_BASIC(368640000, PLL3, 2, HIGH). Nothing difficult to understand.

I wrote this thread because I can't grep any alias/link in the source for setting up the GPU clocks. I don't know how to do it.

Maybe something else manages the GPU clocks.

The KGSL source drivers set an ebi1_clk, and in debugfs it has a rate too.

But I can't find any direct link in the source between ebi1_clk and clk_tbl_grp.

I need more experience working with kernel source.

Edited by Genrix

So, all the KGSL (GPU) settings are listed in /master/kernel/include/linux/msm_kgsl.h

Let's look at it:

Line 127:


struct kgsl_platform_data {

	unsigned int high_axi_2d;

	unsigned int high_axi_3d;

	unsigned int max_grp2d_freq;

	unsigned int min_grp2d_freq;

	int (*set_grp2d_async)(void);

	unsigned int max_grp3d_freq;

	unsigned int min_grp3d_freq;

	int (*set_grp3d_async)(void);

	const char *imem_clk_name;

	const char *imem_pclk_name;

	const char *grp3d_clk_name;

	const char *grp3d_pclk_name;

	const char *grp2d0_clk_name;

	const char *grp2d0_pclk_name;

	const char *grp2d1_clk_name;

	const char *grp2d1_pclk_name;

	unsigned int idle_timeout_2d;

	unsigned int idle_timeout_3d;

	struct msm_bus_scale_pdata *grp3d_bus_scale_table;

	struct msm_bus_scale_pdata *grp2d0_bus_scale_table;

	struct msm_bus_scale_pdata *grp2d1_bus_scale_table;

	unsigned int nap_allowed;

	unsigned int pt_va_size;

	unsigned int pt_max_count;

	bool pwrrail_first;

};

Next we need to grep the source to learn how these settings are used by the drivers.
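One way to do that grep, sketched below; the search paths are assumptions based on the files already quoted in this thread:

```shell
# Find where kgsl_platform_data fields are consumed in the kernel tree
# (paths assume a checkout with a top-level kernel/ directory)
grep -rn "max_grp3d_freq" kernel/ 2>/dev/null || true
grep -rn "grp3d_clk_name" kernel/ 2>/dev/null || true
```

Each hit shows a driver that reads the field, which is exactly the link between the platform data and the clock code we are looking for.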

Edited by Genrix

Guest almighty15

I thought that because these are SoCs, overclocking the CPU overclocks the GPU as well?

I would be interested in testing these, but in, say, 10% speed intervals instead of jumping right in at a 50% overclock.

It would be interesting to see what other devices have their 205 clocked to...


There is a thread on the XDA forum about overclocking the Adreno GPU. I read through it and found a few things:

http://forum.xda-dev...2&postcount=170

http://forum.xda-dev...6&postcount=202

http://forum.xda-dev...3&postcount=211

http://forum.xda-dev...3&postcount=217

http://forum.xda-dev...3&postcount=240

!!!!! http://forum.xda-dev...1&postcount=257

http://forum.xda-dev...2&postcount=262

He says: remember how we overclocked the Pentium 2?

:D We overclocked the Pentium 2 by raising the FSB.

So... we need to raise the AXI? 0_o Hmmmm......

http://forum.xda-dev...3&postcount=267

The thread ends with: the GPU cannot be overclocked independently.

E.g., GPU clock = CPU clock / 7.5.

It's a hardware divider, and it's unrealistic to manage it in software.

But... notice that the driver uses different PLLs, PLL1 and PLL3, to generate the 192 and 245 MHz GPU clocks. Why? :)

Look at this post http://forum.xda-dev...3&postcount=217

-> "So, let's say we have a tree like the one on the left:"

[Image: Nary_to_binary_tree_conversion.png]

I think I understand: we would need to increase the global PLL frequency (A), but then we would have to adjust many of the secondary frequency dividers/PLLs. Our GPU clock sits at, for example, G or E in this picture.

If we don't change the secondary dividers, we end up overclocking everything, and some devices will not be able to run at the new high frequencies. We would have to bring the frequency back down for them by changing every divider of the secondary (slave) PLLs... hard work. 0_o

Global PLL frequency (A) = AXI freq? Hmm...

Edited by Genrix

Guest TechnoLover

Quote (Genrix, above): 245760000 is the output frequency, PLL3 the clock generator, 3 the PLL divider, HIGH the voltage level; divider 2 on the same PLL gives F_BASIC(368640000, PLL3, 2, HIGH).

Alright, I understood everything, checked my code, and found one mistake. After rebuilding I got a 20% gain in the RD 3D Benchmark, but no gain in AnTuTu, so I think it's only worthwhile in some cases.

CPU @ 1GHz and GPU @ 368MHz => 24fps

CPU @ 1.6GHz & GPU @ 368MHz => 25fps

so the CPU clock doesn't change much =/

In AnTuTu, the CPU OC gives 8% more 3D performance.

Edited by TechnoLover

TechnoLover,

reread my post #11.

/kernel/arch/arm/mach-msm/clock-7x30.c

/* MUX source input identifiers. */

#define SRC_SEL_PLL0 4 /* Modem PLL */

#define SRC_SEL_PLL1 1 /* Global PLL */

#define SRC_SEL_PLL3 3 /* Multimedia/Peripheral PLL or Backup PLL1 */

#define SRC_SEL_PLL4 2 /* Display PLL */

#define SRC_SEL_LPXO_SDAC 5 /* Low-power XO for SDAC */

#define SRC_SEL_LPXO 6 /* Low-power XO */

#define SRC_SEL_TCXO 0 /* TCXO */

#define SRC_SEL_AXI 0 /* Used for rates that sync to AXI */

Maybe try this:

Stock code:

        F_BASIC(192000000, PLL1,  4, NOMINAL),

        F_BASIC(245760000, PLL3,  3, HIGH),
192000000 * 4 = 768000000; 768000000 / 2 = 384000000. So if we use PLL1 and set a divider of 2, we get 384000000. Change to:
        F_BASIC(192000000, PLL1,  4, NOMINAL),

        F_BASIC(384000000, PLL1,  2, HIGH),
Of course, we also need to change /kernel/arch/arm/mach-msm/board-msm7x30.c: .max_grp3d_freq = 384000000,. Or use a safer mod:
        F_BASIC(192000000, PLL1,  4, NOMINAL),

        F_BASIC(256000000, PLL1,  3, HIGH),

But the overclock is small... we might not see any FPS difference.
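The PLL1 variant works out the same way in the shell (back-calculation only; whether the hardware accepts these dividers is an assumption to be tested):

```shell
# PLL1 rate implied by F_BASIC(192000000, PLL1, 4, NOMINAL)
pll1=$(( 192000000 * 4 ))
echo "PLL1 rate: $pll1"            # 768000000
echo "divider 2: $(( pll1 / 2 ))"  # 384000000 (aggressive step)
echo "divider 3: $(( pll1 / 3 ))"  # 256000000 (safer step)
```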

Edited by Genrix

Guest TechnoLover

I think you misunderstood ;)

It already works with:

	F_BASIC(192000000, PLL1,  4, LOW),

	F_BASIC(245760000, PLL3,  3, HIGH),

	F_BASIC(368640000, PLL3,  2, HIGH),

;)


In some cases the software may not request more GPU power; maybe AnTuTu has some bugs.

Then I think we should remove 245 MHz and keep only 192 and 368 (or 384).

Maybe try a bigger OC and use PLL1?

        F_BASIC(192000000, PLL1,  4, LOW),

        F_BASIC(384000000, PLL1,  2, HIGH),

If you're afraid of breaking the device, then David can test the high frequency. :)

We can wait for him.

Edited by Genrix

Guest TechnoLover

I checked it over the shell... the rate increased to 368640000 Hz.

Now I'm testing 384000000 Hz ;) but no real improvement...

Quote: "Global PLL frequency (A) = AXI freq? Hmm..."

He just used letters of the alphabet as labels ;)

Edited by TechnoLover

What is this?

/kernel/drivers/gpu/msm/kgsl_yamato.c

device->pwrctrl.clk_freq[KGSL_AXI_HIGH] = pdata->high_axi_3d;

	/* per test, io_fraction default value is set to 33% for best

       power/performance result */

	device->pwrctrl.io_fraction = 33;

Maybe you could look at this driver; you have more experience.

Maybe these lines impose some power-saving / performance limit (a divider)?

Edited by Genrix

http://www.google.co...=utf-8&oe=utf-8

io_fraction:

What is the GPU io_fraction?

I asked faux to explain it, since he can do it best :)

Quote:

It's the ratio between how much time the GPU occupies the bus for graphics processing vs. yielding control of the bus to other system devices that may be hungry for bus accesses.

The default is 33.

http://rootzwiki.com...post__p__107180

https://www.codeauro...92346255450c857

Many tweaks live here:

https://www.codeauro...92346255450c857

Edited by Genrix
