Hello. I'm writing a basic raytracer for the Mandelbulb fractal. I have a fair knowledge of raytracing but have never raytraced fractals. I imagine it is basically the same except that the intersection function would have to solve the equation for a ray intersecting the fractal rather than the equation for a ray intersecting a sphere. The equation of the Mandelbulb is given on Wikipedia. So far so good, but I couldn't solve the equation of a ray intersecting the Mandelbulb myself. I am assuming that we have a ray R which starts at some point P and has direction vector D. Also, in the formula for the Mandelbulb, N and R are given. An exact solution would be really nice, plus some explanation/step-by-step solution, as I don't want to just copy-paste the solution. Also, I don't know how to find the normal to the Mandelbulb at the intersection point.
Also, are there faster methods for approximating a Mandelbulb?
So to sum it up:
1. How do I solve the ray-Mandelbulb intersection equation?
2. What is the normal at the intersection?
3. Are there any faster methods for approximating a Mandelbulb?

[Edit] link fixed - was missing http://

[Edit:]
Thank you all for the replies. Here's what I've found out so far.
(I have 1 question left, it is summed up in the end of the edit)

1. We define a point p to be part of the Mandelbulb if, for the sequence Z = {z0 = 0, zN = (z(N-1))^k + p}, dot(zN, zN) doesn't grow to infinity as N approaches infinity. As with the 2D Mandelbrot set, it can be proven that if dot(zI, zI) > 4 for some natural number I, then dot(zN, zN) goes to infinity as N approaches infinity.
(k is given; different ks produce different Mandelbulbs, the most popular one being k = 8.
dot(a, b) is the dot product of vectors a and b.
If z is a vector, the formula for z^k is given in the wiki link above.)

In practice, we pick a number I and calculate A = dot(zI, zI). If A <= 4, then we assume the point is part of the Mandelbulb. Larger values of I give finer detail. Usually (according to various sites on Mandelbulbs) I == 10 is enough (I haven't tested it yet).

2. I'll assume whoever reads this doesn't have any idea what raytracing is (because the maths involved isn't specific to raytracing). If you do, skip to the next paragraph. In ray tracing, you have a screen somewhere in space and you want to project (render) the world onto this screen. So what you do is cast a ray through each pixel, find which object this ray "hits" first, find some properties of the intersection point (color, distance to the pixel, normal of the surface, and so on), and calculate the color of the pixel from those properties.
In the Mandelbulb case, we define a small distance E and then sample the ray at regular intervals of length E. The first point along the ray that is part of the Mandelbulb is where we "hit" it. Smaller values of E give finer detail but slow down the calculation.

3. I've found some optimization methods, but I won't go into detail here. Doing the computations on the GPU must be noted, however, because it speeds up the process immensely, and it is a general optimization when raytracing.

Summary:
1. I have a "heightmap" of the Mandelbulb; that is, for each pixel the distance to the Mandelbulb is known. Now I want to make it "shiny". I won't be implementing the real properties of a fractal surface, as it is just black (as noted in Solution 1). For this purpose I'll need the normal at the intersection point of each ray with this approximation of the Mandelbulb. I haven't found out how to do this so far (I've Googled a lot).

PS: I wasn't sure if this is the right place to post this question as it might be seen as a new one.
PPS: I'll post some working demos of the whole thing with explanations and code (maybe in an article) as soon as I have them working.
PPPS: To the author of Solution 1: I wasn't sure whether to quote you by your signature, which is just "SA", or by your name, so I quoted it like this. If you feel it shouldn't be like this, please contact me to fix it.
Updated 11-Nov-12 18:24pm
Comments
enhzflep 8-Nov-12 9:06am    
About 15 years ago I copy/pasted some code that would draw 4d fractals (3d fractals that changed with time)

The method it used was to maintain a z-buffer for about 10 horizontal lines at a time. Once all rays intersecting those 10 rows had been fired, you'd have a z-buffer filled with depth values that could then be used to calculate the normal, and hence the lighting, of any point. Once done, the next 10 rows were raytraced, and so on.
I'll see if I can find the code around anywhere.
Sergey Alexandrovich Kryukov 8-Nov-12 14:21pm    
Please see my answer and comment below -- it might be pretty interesting.
--SA
enhzflep 8-Nov-12 21:50pm    
Ha!
There's an understatement, if ever I saw one.. "It _might_ be pretty interesting"

Absolutely fascinating! Thanks for letting me know there was an answer here (and a brilliant one at that, imho) :-) A 5 for sure.
Sergey Alexandrovich Kryukov 9-Nov-12 0:07am    
Thank you very much. If you have any questions on fractals -- welcome to ask. I'm a bit behind the newest, but know fundamentals pretty well...
--SA
enhzflep 9-Nov-12 2:29am    
You're more than welcome.

:face lights up:
Thank-you! A truly generous offer. :-)

I'll explain some of the mathematical ideas behind it in very simple terms, and then you can try to apply your knowledge of raytracing, but it is not going to be simple. It also depends on how realistic you want your model to be. The most realistic model is almost impossible to implement, so you might need some cheating, which is quite usual in virtual reality though. :-)

Strictly speaking, for a fractal, the angle and the point of intersection between the ray of light and the surface of the fractal set are truly undetermined. These things simply do not exist, even theoretically. How can that be? It is because the surface itself is something esoteric; in particular, the surface measure is truly infinite. You should not be too surprised when it comes to fractals.

[EDIT #3]

(To be able to define a local normal, you always need something called a differentiable function. Please see:
http://en.wikipedia.org/wiki/Differentiable_function[^].

This is never the case with a fractal. In fact, you don't need a fractal to face a non-differentiable function; there is an uncountable set of non-differentiable functions not related to fractals in any sense of the word. Fractals are very special functions.)

[END EDIT #3]

When it comes to reflection of light, you should remember that in real-life physics it's possible to produce a piece of real material, something you can hold in your hand, whose surface is best described using fractal mathematics. There are no true fractals in real life, but there are no material objects bounded by perfectly smooth surfaces either: every solid under normal conditions is composed of atoms, but for some objects, a fractal surface is simply the best description. The fractal scaling goes down to a very small size, albeit not the size of an atom. And it's important to understand that such objects have wonderful physical properties, including some optical properties.

If this characteristic size gets smaller than the light wavelength (which is the common case; the wavelength is something relatively big, which you can safely consider to be about half a micrometer), then from the standpoint of optical properties, such objects can behave like true fractals. There is nothing amazing about their look: typically, they just look like very, very black bodies; some such surfaces have been claimed to be the blackest objects ever recorded. Therefore, the most realistic model of reflection would be one that considers diffraction of light on the fractal surface. I'm afraid you would run into nearly top-notch mathematical hardness if you want to study it:
http://www.sciencedirect.com/science/article/pii/S003960289800274X[^],
http://www.researchgate.net/publication/1867777_Elastic_Scattering_by_Deterministic_and_Random_Fractals_Self-Affinity_of_the_Diffraction_Spectrum[^],
http://www.researchgate.net/publication/225976935_Fractal_Characteristics_Investigation_on_Light_Scattering_from_Two_Dimensional_Rough_Surface[^],
http://www.researchgate.net/publication/26315277_Diffraction_by_fractal_metallic_supergratings[^].

Can it be simpler? Probably, but you will face an unusually high volume of data processing (again, typical for fractals). The methods of raytracing use an approach that does not adequately describe real optics even for smooth surfaces with no features approaching the wavelength. In nature, there are no "rays", but you can model things using idealized "rays" representing narrow bunches of light. What is the nature of a fractal surface? No bunch is narrow enough. The whole idea of a fractal is that it scales down to more and more detail observed at higher magnification. If you can observe some total number of tiny features at the limit of the resolution (say, visible as a few pixels) and then increase the magnification, you will recognize just as many features in the magnified picture.

So, the cheating would be to re-render the fractal surface for each view of it, which is always done when fractals are visualized. This procedure always stops at some level of detail; otherwise you would calculate it infinitely. Then, you should evaluate the smallest characteristic feature of this "under-fractalled" surface, and use "rays" of an even smaller diameter. In other words, the diameter of a ray should shrink with the fractal view. In this case, you can use your usual methods of tracing. Not quite realistic, but it would probably give a better idea of what a fractal looks like. I warned you, this is not easy at all.

[EDIT #1]

So the answers to your questions:

1 and 2: Such things do not exist for a fractal surface. They only exist for an "incomplete fractal", obtained from the finite number of iterations used to build a model of the fractal. You can render such a non-fractal surface for every separate view and operate on that surface in the usual way.
3. Be happy if you can solve this problem even slowly. More seriously: the problem will incur an unusually high volume of calculation. I would probably think about using the video card's GPU.

[END EDIT #1]

[EDIT #2]

Perhaps my arbitrary use of the term "render" in "render of a surface" was confusing. It is absolutely not the same as your scene rendering. Here is what happens: with "normal" surfaces, you can have a ready-to-trace surface model and work with it whenever you want to render a scene for a certain lighting and a certain distance and orientation between the camera and the rendered object.

In the fractal case, there is no such thing. A complete fractal object never exists in the computer, which is a finite-state machine. This is because the representation of such objects "needs" an infinite number of iterations, and its informational capacity is infinite. In a way, it only exists in our imagination. So, the rendering is two-stage: each time you get a new distance and orientation between the camera and the rendered object, you first need to build up an approximate model of the fractal surface using some finite number of iterations. You should apply some criterion related to the level of detail you obtain by scaling into a fragment of this self-similar object. In the second stage, you can ray-trace it.

[END EDIT #2]

So, wish you the best of luck,
—SA
 
Comments
Stefan_Lang 9-Nov-12 4:49am    
Excellent response. I was about to describe what it takes to calculate the intersection, without thinking that for raytracing you also need the angle, and as you said, that is impossible! But your suggestions on how to get around that problem make sense.

As someone at CP (who? I don't remember) quotes in his sig: "This may work in practice, but it will never work in theory" ;-)
Sergey Alexandrovich Kryukov 9-Nov-12 11:26am    
Thank you, Stefan.
--SA
NKlinkachev 13-Nov-12 2:57am    
Thank you for your response. It is way more than what I'd imagined anyone would answer :) I've updated the question with some progress and the remaining unanswered parts. I'd be really happy (and thankful) if you could help me out :)
Sergey Alexandrovich Kryukov 13-Nov-12 10:20am    
You are welcome. You should understand that it will need some time, as things are not so simple (however, at first glance, your new sections of the question look clear -- a pleasure to deal with such an inquirer). Let me see and think...
--SA
Sergey Alexandrovich Kryukov 13-Nov-12 10:22am    
My question is: do you model diffuse scattering on the surface (effect of matte surface) and how? This is one of the key elements...
--SA
I've added this code here for completeness. While I don't really consider it a solution in any sense of the word, it does illustrate the idea I'd mentioned of maintaining a z-buffer in order to calculate surface normals.

It's based on the Julia set, for what it's worth. It's also from a time when Borland was the only compiler I had. Goodness, I think I still had Windows 3.11 when I last compiled it!

// 4DFRACTA.CPP
#include <math.h>
#include <graphics.h>
#include <conio.h>
#include <stdio.h>
#include <stdlib.h>
//#include <iostream.h>

//#include <fstream.h>


//	int zant=450;   //z-resolution. bigger zant -> better resolution
//	int zant1=25;   //z-resolution. bigger zant -> better resolution
	int zant=50;   //z-resolution. bigger zant -> better resolution
	int zant1=3;   //z-resolution. bigger zant -> better resolution
	int pixsize=2, vissize=1;

	double xmin=-1.66+0.5, xmax=1+0.5;
	double ymin=-1, ymax=1;
	double zmin=-1.7, zmax=1.7;
	int iter=6;

	double lightx=-1, lighty=1, lightz=-3;

	double vx=0, vy=0, vz=0;

	double cr=0.50;  //constant real value
	double ci=0.40;  //constant imaginary(1) value
	double cj=1;  //constant imaginary(2) value
	double ck=0.05;   //constant imaginary(3) value
	double wk=-0.55;   //4th dimension

  int background = 0;

	int maxcolor = 16;

	int sx,sy;
	double dx,dy,dz;
	double origx, origy, origz;
	double rminx, rminy, rminz;
	double dxx, dxy, dxz;
	double dyx, dyy, dyz;
	double dzx, dzy, dzz;
	double dzx1, dzy1, dzz1;
	double tempx, tempy, tempz;
	double cosx,cosy,cosz,sinx,siny,sinz;
	double z_buffer[640][10];
	int buffer_y;

void rotate3D(double &x,double &y,double &z)
{
	x-=origx;y-=origy;z-=origz;
	double xy=y*cosx-z*sinx;
	double xz=y*sinx+z*cosx;
	double xx=x;
	x=xx;
	y=xy;
	z=xz;
	double yx=x*cosy+z*siny;
	double yz=-x*siny+z*cosy;
	x=yx;
	z=yz;
	double zx=x*cosz-y*sinz;
	double zy=x*sinz+y*cosz;
	x=zx;
	y=zy;
	x+=origx;y+=origy;z+=origz;
}

void rotatevalues()
{
	rminx=xmin;rminy=ymin;rminz=zmin;
	rotate3D(rminx, rminy, rminz);
	tempx=xmax;tempy=ymin;tempz=zmin;
	rotate3D(tempx, tempy, tempz);
	dxx=(tempx-rminx)/sx;dxy=(tempy-rminy)/sx;dxz=(tempz-rminz)/sx;
	tempx=xmin;tempy=ymax;tempz=zmin;
	rotate3D(tempx, tempy, tempz);
	dyx=(tempx-rminx)/sy;dyy=(tempy-rminy)/sy;dyz=(tempz-rminz)/sy;
	tempx=xmin;tempy=ymin;tempz=zmax;
	rotate3D(tempx, tempy, tempz);
	dzx=(tempx-rminx)/zant;dzy=(tempy-rminy)/zant;dzz=(tempz-rminz)/zant;
	dzx1=dzx/zant1;dzy1=dzy/zant1;dzz1=dzz/zant1;
}

double calc_l(double x, double y, double z)
{     //  (x,y,z,w)^2 =
		//					( x*x - y*y - z*z - w*w ,
		//					  x*y + y*x + z*w - w*z ,
		//					  x*z + z*x - y*w + w*y ,
		//					  x*w + w*x + y*z - z*y ) }
	double lengde;
	double temp;
	double w=wk;
	int m=0;
	do {
	temp=x+x;
	x=x*x-y*y-z*z-w*w+cr;
	y=temp*y+ci;
	z=temp*z+cj;
	w=temp*w+ck;

	m++;
	lengde=x*x+y*y+z*z+w*w;
	} while ((m<iter) && (lengde<2));
 return lengde;
}

double calc_angle(double w,double e,double n,double s,double cx,double cy,double cz)
{
	double lightdx=cx-lightx;
	double lightdy=cy-lighty;
	double lightdz=cz-lightz;

	double lightlength=sqrt(lightdx*lightdx+lightdy*lightdy+lightdz*lightdz);

	double fx=/*(0)*(s-n)*/-(e-w)*(dy+dy);
	double fy=/*(e-w)*(0)*/-(dx+dx)*(s-n);
	double fz=(dx+dx)*(dy+dy)/*-(0)*(0)*/;

	double flength=sqrt(fx*fx+fy*fy+fz*fz);
	double c=(fx*lightdx+fy*lightdy+fz*lightdz)/(flength*lightlength);
	return c;
}

void show_buffer(int y)
{
	double a;

	for (int t=1; t<sx; t++) {
	 for (int i=1; i<=8; i++) {
		if ((y+i)<sy) {
			a=calc_angle(z_buffer[t-1][i],z_buffer[t+1][i],z_buffer[t][i-1],z_buffer[t][i+1]
						 ,t*dx+xmin,(y+i)*dy+ymin,z_buffer[t][i]);
			if ((z_buffer[t][i]>zmax) && (background==0)) {
			 setfillstyle(1,0);
			} else if (a<0) {
			 setfillstyle(1,1);
			} else {
			 setfillstyle(1,1+(maxcolor-1)*a);
			}
			bar(t*vissize,(y+i)*vissize,t*vissize+vissize-1,(y+i)*vissize+vissize-1);
		}
	 }
	}


	for (int t=0; t<640; t++) {  // 't' redeclared here; old Borland scoping let it leak from the loop above
	 z_buffer[t][0]=z_buffer[t][8];
	 z_buffer[t][1]=z_buffer[t][9];
	}
	buffer_y=2;
}

void main()
{
	int pz, pz1;
	double l;

	int gdriver = VGA, gmode = VGAHI, errorcode;
//	errorcode = registerbgidriver((void(*)())EGAVGA_driver);
//				registerbgidriver(void (*driver)(void));
//	if (errorcode < 0) {
//		printf("Graphics error: %s\n", grapherrormsg(errorcode));
//		exit(1);
//	}

	initgraph(&gdriver, &gmode, "..\\bgi");  // "\\" needed: a single backslash would make "\b" a backspace character

	errorcode = graphresult();
	if (errorcode != grOk) {
		printf("Graphics error: %s\n", grapherrormsg(errorcode));
		exit(1);
	}

	for (int i=1; i<16; i++) {
	 setrgbpalette(i, 0, i*4, 0);
	 setpalette(i, i);
	}
	setrgbpalette(0,0,0,63);
	setpalette(0, 0);


//	sx=getmaxx()/pixsize;
//	sy=getmaxy()/pixsize;
	sx = 600;
	sy = 400;

	dx=(xmax-xmin)/sx;
	dy=(ymax-ymin)/sy;
	dz=(zmax-zmin)/zant;
	double dz1=dz/zant1;

	origx=(xmin+xmax)/2;
	origy=(ymin+ymax)/2;
	origz=(zmin+zmax)/2;


	int ve=1;
//  for (ve=0; ve<50; ve++) {  //only used when making animations
//	 vx=0;vy=0;vz=0;
	vx=vx/180*3.14159265;
	vy=vy/180*3.14159265;
	vz=vz/180*3.14159265;

	cosx=cos(vx);cosy=cos(vy);cosz=cos(vz);
	sinx=sin(vx);siny=sin(vy);sinz=sin(vz);

	rotatevalues();
	buffer_y=0;
	int py;  // declared outside the loop: it is used again after the loop ends
	for (py=0; py<=sy; py++) {
	 for (int px=0; px<=sx; px++) {
		tempx=rminx+px*dxx+py*dyx/*+pz*dzx*/;
		tempy=rminy+px*dxy+py*dyy/*+pz*dzy*/;
		tempz=rminz+px*dxz+py*dyz/*+pz*dzz*/;
		pz=0;
		do {
			tempx+=dzx;
			tempy+=dzy;
			tempz+=dzz;
			l=calc_l(tempx,tempy,tempz);
			pz++;
		} while ((l>2) && (pz<zant));
		pz1=0;
		do {
			pz1++;
			tempx-=dzx1;
			tempy-=dzy1;
			tempz-=dzz1;
			l=calc_l(tempx,tempy,tempz);
		} while (l<2);
		if (pz < zant)
			z_buffer[px][buffer_y]=zmin+pz*dz-pz1*dz1;
		else
			z_buffer[px][buffer_y]=zmax+dz;
		setfillstyle(1,15-pz/10);
		bar(px*vissize,py*vissize,px*vissize+vissize-1,py*vissize+vissize-1);
		if (kbhit()) break;
	 }
	 buffer_y++;
	 if (buffer_y==10) show_buffer(py-9);
	 if (kbhit()) break;
	}
	if (!kbhit()) {
	 show_buffer(py-buffer_y);
	 printf("\7");
	}
	char answer = getch();
//  }  //end of FOR-loop. Only used when making animations
	closegraph();
}


EDIT: See here for a quickly ported online demo of the code (works best with Chrome: a 4 s render in Chrome vs. 15 s in IE):
http://jsfiddle.net/enhzflep/K76jd/[^]
 
Comments
NKlinkachev 13-Nov-12 2:59am    
Hi :) Thank you for your reply :) Unfortunately, I wasn't able to understand how you calculate the surface normals from the z-buffer. Good work anyway :) and thanks for taking the time to answer.
Your question raised my interest - I hadn't heard of 3D Mandelbrot-related fractals before. After following some links I found this site with awesome pictures and also a couple of links to 3D visualization software for 3D fractals on page 2. Maybe you can find something useful there for your purposes.
 
Comments
NKlinkachev 13-Nov-12 3:03am    
Hi, thanks for your reply. The site looks interesting, lots of shiny pictures :) I wasn't able to understand much of the visualization programs' code (I didn't have much time to look at them), but I'll definitely explore them when I've done the basic 3D rendering.
In the calculation of the fractal you have the direction of the vector heading for infinity. After each iteration you have a complex number, and you can consider the real and imaginary parts of that number as a direction and magnitude. Does this constitute some kind of "ray" that you can work with? I am not at all familiar with ray tracing, and honestly the mathematical discussion of this article is far above me.
 

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


