Thursday, May 2, 2013

Cool Graphics Scene

This final project was pretty cool: I tried to implement as many of the concepts I learned in the graphics class as possible. Here are some of the things I did:

SkyBox around the world.
Shadows of every object.
Environment map on the water.
Normal map on the house.

To move around, use WASD for the camera. Use IJKL to move the light.

I don't have much to say about this assignment. But there is a freakin' zebra on my beach if you zoom in.





Graphics Assignment 12

This week's assignment was on shadows. Shadows only work for opaque entities. To make the entities in your scene cast shadows, you first need a directional light. We use a directional light for shadows because, unlike a point light, its direction doesn't change much with the position of objects in the scene. To cast shadows we take the light's position and, conceptually much like a ray cast, render a shadow map of the scene from that position. It is the same as creating the view depth map, but from the light's perspective: instead of passing the world_to_view and view_to_projected transform matrices, we pass world_to_light and light_to_projected transform matrices. This gives us a depth map from the light's perspective (the shadow map).
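
Here is a minimal sketch of what that shadow-map pass might look like in HLSL. The uniform and shader names are my own placeholders, not the assignment's actual ones; the point is just that the light's transforms stand in for the camera's:

// Vertex shader for the shadow-map pass: same shape as the view-depth
// pass, but using the light's transforms instead of the camera's.
uniform float4x4 g_model_to_world;
uniform float4x4 g_world_to_light;      // stands in for world_to_view
uniform float4x4 g_light_to_projected;  // stands in for view_to_projected

void shadow_vs( in float4 i_position : POSITION,
                out float4 o_position : POSITION,
                out float o_depth_light : TEXCOORD0 )
{
    float4 world = mul( i_position, g_model_to_world );
    float4 light = mul( world, g_world_to_light );
    o_position = mul( light, g_light_to_projected );
    // Light-space z is the depth we want stored in the shadow map
    o_depth_light = light.z;
}

// Fragment shader: write the depth into the red channel of the shadow map
void shadow_ps( in float i_depth_light : TEXCOORD0,
                out float4 o_color : COLOR0 )
{
    o_color = float4( i_depth_light, 0.0, 0.0, 1.0 );
}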

   While drawing the opaque bucket, we calculate each object's position in light space and in the light's projected space, and pass these values to the fragment shader as texture coordinates. The fragment shader receives the previously created shadow map as a sampler2D. We divide the projected-space position (from the light) by its w component and remap the coordinates from the -1 to 1 range to the 0 to 1 range. We sample the shadow map at this location and take the x component, since it contains the depth written in the shadow pass. This gives us the stored depth. We compare the stored depth with the new depth, that is, the z coordinate of the object in light space. If the stored depth value is greater than the new depth value, the pixel is not in shadow; otherwise it is.
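
And a sketch of the shadow test on the drawing side, again with made-up names; the light-space depth and light-projected position are computed in the vertex shader and interpolated in, as described above:

uniform sampler2D g_shadow_map;  // depth map rendered from the light

float ComputeLitFactor( float4 i_position_lightProjected,
                        float i_depth_light )
{
    // Divide by w, then remap x and y from [-1,1] to [0,1]
    // (flipping y, since texture v runs downward in Direct3D)
    float2 uv = i_position_lightProjected.xy / i_position_lightProjected.w;
    uv = float2( uv.x * 0.5 + 0.5, 0.5 - uv.y * 0.5 );

    // The red channel holds the depth stored during the shadow pass
    float storedDepth = tex2D( g_shadow_map, uv ).x;

    // Stored depth greater than this pixel's depth -> lit, otherwise shadowed
    // (a small bias on the comparison helps against shadow acne)
    return ( storedDepth > i_depth_light ) ? 1.0 : 0.0;
}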

PIX capture of the shadow map in the light's projected space:



Wednesday, May 1, 2013

Summer Plans and The future of Co-signers


The Co-signers team had a meeting just before finals week to discuss the future of the game over the summer break. We discussed what everyone is doing over this period of three months. Almost everyone was quite unsure about their plans; everyone is still looking for internships. But a couple of engineers and producers are sure to stay and work on the game. We have decided to meet again in a couple of months to see where everyone stands with internships and who is certain to work on the game during the summer.
     I am still nowhere with my internship search. I haven't heard back from a lot of the companies I applied to, like EA, Activision, and some iOS gaming companies. I am waiting for finals week to get over so that I can start following up with these companies. Until then, I too am unsure whether I will be working on Co-signers at all. But I suppose by mid-May I will have a clear idea about my summer plans, like everyone else on the team. Either way, I am planning to work on Co-signers as much as possible during these three months.

Sunday, April 14, 2013

Working on Ping


I had taken the ping task for this week's sprint. Last week I implemented the Inventory System on the thief side of Co-signers. The Inventory System keeps a record of all the tools that the thief acquires. It also has a time-based renewal system that periodically increases the count of each tool. The thief's Inventory System includes the following tools and gadgets:
·      Candy
·      Camera Bug
·      EMP
·      Flash Bang

Here is how it looks:




For ping, I took the thief's rotation, which Unity provides in degrees, and set the compass rotation to match it. Kiran was working on the ping on the hacker side. We decided to pass the position of the ping in 2D (X and Z, since the Y coordinate doesn't matter) from the hacker side to the thief side over the network. Whenever a hacker pings on his side, the thief side interface receives the position of the ping. The thief side script I wrote normalizes the difference between the thief's position and the ping position, which gives me the direction of the ping, and then multiplies it by the distance; the result appears on the GUI compass and updates every frame for 15 seconds. Keeping the ping active for 15 seconds was a design decision.

Friday, April 12, 2013

Graphics Assignment 11

Assignment 11 is about post-processing. We draw the opaque bucket and the translucent bucket to a surface, say a post-processing surface, and then apply effects like vignette or bloom to that texture.
   
  Steps for getting the vignette effect:
Render all the entities (opaque and translucent) to the post-processing surface.
Use this surface's texture as a uniform in the fragment shader of the post-process entity (a full-screen quad).
For each texture coordinate, sample this texture and also sample the vignette image (passed as a uniform), and multiply the two float4 values to get the output color, as in the sketch below.
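
A minimal sketch of that fragment shader, with assumed sampler names:

uniform sampler2D g_scene;     // texture of the post-processing surface
uniform sampler2D g_vignette;  // the vignette image

void vignette_ps( in float2 i_uv : TEXCOORD0,
                  out float4 o_color : COLOR0 )
{
    // Darken the rendered scene by the vignette mask
    o_color = tex2D( g_scene, i_uv ) * tex2D( g_vignette, i_uv );
}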


Steps for getting the HUD:
Take a quad and give it the texture of the HUD to be displayed.
In the vertex shader, resize the quad (I am dividing the model-space position by 2 and subtracting some offset to center it on the screen) and pass that position on to the hardware, as sketched below.
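
A sketch of that vertex shader; the halving and the offset are the knobs mentioned above, and the exact constants are whatever centers your quad:

void hud_vs( in float4 i_position : POSITION,
             in float2 i_uv : TEXCOORD0,
             out float4 o_position : POSITION,
             out float2 o_uv : TEXCOORD0 )
{
    // Shrink the quad and nudge it; these constants are illustrative
    float2 pos = i_position.xy * 0.5 - float2( 0.4, 0.4 );

    // Output directly in projected space so the HUD ignores the camera
    o_position = float4( pos, 0.0, 1.0 );
    o_uv = i_uv;
}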

Output:






Download Code

Graphics Assignment 10

The 10th assignment was on deferred rendering. You hold on to the output of the opaque bucket and pass it to the translucent bucket to create a fading effect where translucent objects intersect opaque ones.

   Initial depth pass:
     The opaque bucket is asked to draw the depth of the opaque objects relative to the distance from the camera. We create a new surface for this depth. Instead of the D3DFMT_X8R8G8B8 format, we use D3DFMT_R16F, because the value we want the fragment shader to output is depth, which only needs the red channel. To make the fragment shader write the opaque bucket's depth to the viewDepth surface, we set the render target to that surface instead of the back buffer. The texture is then available from the viewDepth surface.
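
The fragment side of that depth pass is tiny; a sketch, assuming the vertex shader forwards the view-space z in a texture coordinate:

void viewDepth_ps( in float i_depth_view : TEXCOORD0,
                   out float4 o_color : COLOR0 )
{
    // Only the red channel survives in a D3DFMT_R16F render target
    o_color = float4( i_depth_view, 0.0, 0.0, 1.0 );
}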

Draw opaque bucket:
    After we have the depth of all the opaque objects, we can draw the opaque bucket to a different surface, say an opaqueBuffer surface. The order of drawing the opaque bucket and the view depth doesn't matter.

Draw translucent bucket:
  Set the render target back to the back buffer and use StretchRect to copy the opaqueBuffer surface into it. Now calculate the distance from the camera of the pixel being shaded and compare it with the corresponding texel of the view depth texture (that is, the depth of the opaque object in view space). We saturate the difference of these two depths to get a value between 0 and 1, which then becomes the alpha value of the pixel.
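
A sketch of that comparison in the translucent fragment shader; g_view_depth and the screen-uv math are my assumptions:

uniform sampler2D g_view_depth;  // depth of the opaque scene, in view space

float ComputeFadeAlpha( float4 i_position_projected, float i_depth_view )
{
    // Screen-space uv of this pixel, from its projected position
    float2 uv = i_position_projected.xy / i_position_projected.w;
    uv = float2( uv.x * 0.5 + 0.5, 0.5 - uv.y * 0.5 );

    // Depth of the opaque surface behind this pixel
    float opaqueDepth = tex2D( g_view_depth, uv ).x;

    // The saturated difference becomes the alpha: the closer a pixel is
    // to intersecting opaque geometry, the more it fades out
    return saturate( opaqueDepth - i_depth_view );
}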

To see the fading effect properly, use the NUM8, 4, 2, and 6 keys, which are the mesh controls.

Final Output:








View Depth Texture:


Hardware Depth Buffer:



Download Code








Friday, April 5, 2013

Carrying forward with Unity and Prep for Alpha

As a team we have decided that we are going to use Unity for the game. Following up on my last post, "Do and Don't at GDC", I want to elaborate a little more on what the industry looks for in an entry-level programmer.

  Talking to professionals from different game companies made it really clear what I have to do over the next year to get into the gaming industry. First things first: depth is more important than breadth. Professionals don't want you to know every game engine. They don't even expect you to know the technologies they work on, which is one reason general CS majors also get game dev jobs. What they really want is a do-er. A problem solver. If you get stuck somewhere, they want you to solve the problem or find a way around it. Period. Be a problem solver and you will never be jobless, anytime.

Do what you do well. If you are a game tester and you want to move into game dev, just excel at what you do. Get recognized for doing awesome work and you will get what you want. That is the way to do it. This doesn't apply to me, but it is something to remember for later.


At GDC Play, I played some really good-looking and novel games. After playing one game (I don't remember its name), I talked to its developer. The very first answer to my very first question baffled me: the game was made in Unity.

So, all my misconceptions about using Unity for my thesis game just got cleared up. Two very important things got addressed -

  1. Professionals don't care if you do your thesis game in Unity.
  2. Unity games can look really beautiful.


 So, based on what I learnt at GDC, I want to go with Unity without any doubts. Alpha is approaching and we need to start preparing for it, so that we can present the game well to the industry professionals at EAE Fest.

Do and Don't at GDC

GDC was fun. I learnt a lot from the game developers I met there. I had an expo pass, valid only on Wednesday, Thursday, and Friday, which gave me access only to the GDC pavilion, GDC Play, and the last day's Career Seminar.

   I found the Career Seminars really useful. I learnt how to write a good resume and cover letter. I learnt what recruiters and company pros look at while interviewing a candidate. I also learnt a lot of other things that weren't directly relevant to me but are otherwise good to know.

At the expo pavilion I met my favorite game companies, like Riot Games, ArenaNet, and Ubisoft. My goal was to meet people and ask them for jobs or for tips that would help me get into the game industry. A lot of pros personally gave me resume advice and answered my "how do I get an entry-level job?" questions.

 One of the most amazing experiences was talking to Bethesda's lead engineer Brett Douville, whom I met at the Career Seminar. He told me that rather than having knowledge of many different engines, it helps to know just one engine in greater depth. That, right there, cleared my mind, and after that I didn't worry about whether to use Unity or UDK for my thesis game. More on that later. I was really pleased to meet this man; we talked about a lot of other things too. I also met Mike Acton, a lead engineer at Insomniac. One thing is for sure: gamers are the most awesome people.

So, my list of Do's and Don'ts at GDC (it's short but precise) -

Do:
 Go to the bootcamps and tutorials. You think they are useless, but they are NOT.
 Open up and talk to anyone you see near you. Trust me, you will get something out of it. If nothing else, you will definitely get enlightened.
 Go to the companies and people for their advice on your resume and portfolio.
 Visit Riot Games. Those people, and the way they talk, will blow your mind.

Don't:
 Hesitate to talk.
Go to a company or a pro and ask them directly for a job or an interview.
Forget to attend the Career Seminar. You think you don't need it, but you DO.
Lose your pass :P I am serious.




Tuesday, March 26, 2013

Reconsidering UDK

This week we are going to the Game Developers Conference, which is held in San Francisco every year. As our team Hack n Hide won't be working on The Co-signers for a week, we are reconsidering UDK for our game.
         The discussion about switching to UDK started when our professors told us that the dates of the Alpha and Beta versions of the game are flexible. We programmers had a meeting with Bob in which we talked about the challenges we would face if we tried to switch engines at this point. We had always wanted to use some other engine for the game, but because Unity3D suited our needs best, we settled on Unity for our final game. But I personally think the game will suffer graphically if we continue with Unity. Unity doesn't provide strong graphics capabilities, and the way this engine works might give us a hard time maintaining 60 FPS in later phases.

Pros of using Unity:


  • Unified asset pipeline. No need to spend time on resource subsystem at all, no buggy import routines to write and fix: just drop a file into folder, and it works.
  • Integrated level editor. No need to spend time on level tools: just get straight to business.
  • Great tweaking and debugging support: all your gameplay variables are shown right as you play, and can be changed on the fly too - and all this without writing a single line of code. Pause the game anytime, or step through code one statement at a time.
  • Quite comprehensive library of ready-made components. Rendering, sound, physics, controls - a lot of "boilerplate" code is already written.
  • Mono as a script host. While one can argue about merits of C# as a language, Mono's base class library offers a wealth of functions. Collections, I/O, multithreading, and insanely expressive LINQ all speed up development considerably.
Cons of using Unity:
  • Doesn't give you a feeling of making the game from scratch.
  • A bad mark on all the engineers (the industry looks for candidates with UDK or CryEngine experience).
  • Graphically poor if custom shaders are not written.


 So, now is the right time to do research, meet industry people at GDC, and talk to them about the impact of using Unity on your thesis game.


Friday, March 22, 2013

Graphics Assignment 9

The first part of this assignment was environment mapping. In this part I created a new effect that uses a cubemap as a texture to produce a good environment map. Basically, I have two meshes, a sphere and a plane, both of which use the same material.

Calculating Environment Map:

First, calculate the reflection direction. For that reflection direction, which is a Vector3, look up the rgb value of the cubemap. That rgb value depends on the view direction; for instance, looking in the +z direction should show the reflection of the +z side of the cube map on the floor beneath.
Use the Fresnel effect and Schlick's approximation: the Fresnel effect describes how much light a surface reflects versus refracts. I am using it to get more reflection as the viewing angle gets closer to grazing, that is, more perpendicular to the surface normal.
Then lerp between the reflected rgb color and the diffuse color by some constant to interpolate between the two values and produce the final diffuse color.
Using that diffuse color as part of o_color, like before, shows the effect. A sketch of the math follows.
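
Put together, the fragment-side math might look like this; the names and the base reflectance value are assumptions:

uniform samplerCUBE g_environment;  // the cube map
uniform float g_reflectance = 0.2;  // base reflectance for Schlick's term

float3 EnvironmentColor( float3 i_normal_world, float3 i_toEye_world,
                         float3 i_diffuse )
{
    float3 n = normalize( i_normal_world );
    float3 v = normalize( i_toEye_world );  // surface-to-eye direction

    // The reflection direction looks up the cube map
    float3 reflected = texCUBE( g_environment, reflect( -v, n ) ).rgb;

    // Schlick's approximation: reflection grows toward grazing angles
    float fresnel = g_reflectance
        + ( 1.0 - g_reflectance ) * pow( 1.0 - saturate( dot( n, v ) ), 5.0 );

    // Lerp between the surface's diffuse color and the reflection
    return lerp( i_diffuse, reflected, fresnel );
}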


Wobbling Effect:

Passing the projected position to the fragment shader gives the 2D screen coordinates of the meshes.
Convert that 2D position from the -1 to 1 range to DirectX's 0 to 1 coordinate system, also inverting the y-axis.
Create a distortion with a sin modification and add it to the result of the step above, giving a per-pixel distortion.
Sampling the opaque buffer texture at that distorted coordinate gives the cool wobbly effect, as sketched below.
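
In shader terms the wobble might look like this; the sampler name, time uniform, and distortion constants are assumptions:

uniform sampler2D g_opaque;  // the opaque buffer texture
uniform float g_time;        // elapsed time driving the ripple

float4 WobbleColor( float4 i_position_projected )
{
    // Projected xy to Direct3D texture space: [-1,1] -> [0,1], y flipped
    float2 uv = i_position_projected.xy / i_position_projected.w;
    uv = float2( uv.x * 0.5 + 0.5, 0.5 - uv.y * 0.5 );

    // A small sin-based offset per pixel creates the wobble
    uv += 0.01 * sin( g_time + uv.yx * 40.0 );

    return tex2D( g_opaque, uv );
}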

PIX capture showing the opaque buffer texture:

Thursday, March 14, 2013

Spring Break means more Brainstorming!


I have been playing Co-signers for a long time with Vaibhav. Over this spring break we are brainstorming over new game ideas and trying to include cool mechanics and weapons. As it is the spring break I have a lot of time to complete my classwork backlog and play cool games, that includes Co-signers.
 
  While playing the game I couldn't resist comparing the thief side to Erie. Erie is so beautiful, and its mechanic of spraying the walls to draw cool stuff and remember directions is my personal favorite. I want The Co-signers to go along similar lines and include some cool mechanics and gadgets for the Pointman. To start with, maybe we can add cookies to the Pointman's side: the Pointman can place a cookie near a guard, which will attract the guard, and eating it will put the guard to sleep for a minute or so. Another thought is that the Pointman should have some control over the hacker, like giving the hacker puzzles to solve. Or the hacker can't unlock further nodes because a node asks for a password, which only the Pointman can provide after reaching a certain room, or after putting a guard to sleep and stealing it from him.

 I couldn't think of much on the hacker side. That's because a 2D interface is new to me, and designing a game around one is novel to me. But the hacker could get more gadgets or tools on his side too, like the power to lock doors to keep the Pointman safe from the guards.

 I am not a designer but I can design and I love to think about the design of games.

Friday, March 8, 2013

Graphics Assignment 8

This week's assignment was on normal maps. The basic idea is that we sample a normal from the normal texture and multiply it by the TBN (Tangent, Binormal, Normal) matrix in the fragment shader. TBN is a 3x3 matrix in that specific order: the tangent's xyz coordinates as the first row, the binormal's xyz as the second row, and the normal's xyz as the third row. We get the tangents and binormals per vertex of the mesh from Maya while exporting.
  
  The tangent and binormal in model space are passed to the vertex shader. The world-space tangent and binormal are calculated by multiplying by the ModelToWorld transformation, the same way we did for normals. The world-space TBN vectors are then passed on to the fragment shader. In the fragment shader, another sampler is used to sample the normal map. That sample is multiplied by the TBN matrix, which gives the texture's normal in world space. So I am converting the normal from texture space to world space, which gives me the cool effect of bumpiness without having to deal with the model's complex geometry.
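
A sketch of the fragment side, with assumed names. One detail the paragraph above doesn't spell out: the stored normal has to be decoded from the [0,1] texture range back to [-1,1] before the TBN multiply:

uniform sampler2D g_normal_map;

float3 WorldNormal( float3 i_tangent_world, float3 i_binormal_world,
                    float3 i_normal_world, float2 i_uv )
{
    // Rows in the order described above: tangent, binormal, normal
    float3x3 tbn = float3x3( normalize( i_tangent_world ),
                             normalize( i_binormal_world ),
                             normalize( i_normal_world ) );

    // Decode the stored normal from [0,1] back to [-1,1]
    float3 n_texture = tex2D( g_normal_map, i_uv ).xyz * 2.0 - 1.0;

    // Texture space -> world space
    return normalize( mul( n_texture, tbn ) );
}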

Changes in MayaExporter:

  I now include tangents and bitangents in the text file while exporting the model from Maya.
  
 I have converted Maya's right-handed convention to left-handed, since DirectX is left-handed -

  •   By negating the z coordinate of the position, normal, tangent, and binormal
  •   By negating the binormal's xyz
  •   By reversing the order of the indices, that is, the winding order
  •   By changing the V coordinate of the UVs to V = 1 - V
Mesh Movement: WASD QE
Light Movement: IJKL
Camera Movement: NUM8,4,2,6



The Co-signers

I am now working on The Co-signers with team Hack and Hide. Choosing between Co-signers and Vinyl was a difficult bet. I loved both games equally.

Things I love about Co-signers:

I find the idea of asymmetric game play novel.
The use of different consoles.
A pretty huge game and insanely challenging.
Good theme of students clearing their debts.

Things I love about Vinyl:

It's a music game. I love it.
Probable use of different languages and platforms.

Choosing Co-signers over Vinyl was an impulsive decision. I had to pick one, and the final question my decision came down to was "Which game would be more challenging?".

Friday, March 1, 2013

Graphics Assignment 7

This week I started restructuring my code, as John-Paul said it was hard-coded and not completely data-driven. I have created a cScene class that holds the light and camera, which were previously in cRenderer. Now nothing is hard-coded, and when the entity data is changed in the text file, the project works without having to change any code.

  As I preferred to continue with my own assignment instead of taking a classmate's and working on top of it, the next item in the backlog was sorting the materials and effects, part of assignment 5 part B. I used a simple sorting algorithm that sorts by effect first and then sorts the list of entities sharing the same material.

  For the 7th assignment I implemented three different types of alpha blending. The first is partial alpha blending, for which I set SRC to SRCALPHA and DEST to DESTALPHA. For additive blending, SRC is ONE and DEST is ONE. For binary alpha, as required, I turned alpha blending off.
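
Expressed as Direct3D 9 effect-pass render states (the same flags can also be set from C++), the three modes look roughly like this; I am echoing the states named above, and the shader assignments in each pass are omitted:

// Partial alpha blending, with the states named in the post
technique PartialAlpha
{
    pass P0
    {
        AlphaBlendEnable = TRUE;
        SrcBlend = SRCALPHA;
        DestBlend = DESTALPHA;  // INVSRCALPHA is the more common pairing
    }
}

// Additive blending
technique Additive
{
    pass P0
    {
        AlphaBlendEnable = TRUE;
        SrcBlend = ONE;
        DestBlend = ONE;
    }
}

// Binary alpha: blending simply turned off
technique Binary
{
    pass P0
    {
        AlphaBlendEnable = FALSE;
    }
}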

  I sort the opaque entities before loading them into the mesh and effect lists. On every draw call I re-sort only the entities with translucent materials, according to their distance from the camera, for correct alpha blending.

   I have also added the Q and E keys to move the camera along the Z-axis for testing the alpha blending.

 One problem that I couldn't quite solve was the flickering of the two cylinders. I am not sure, but it may be because the cylinders are high-poly and, since I sort them every frame to keep the alpha blending correct, they take time to render.

  I tried to debug the project in PIX, but it gives me errors in both the release and debug builds. When I try to debug the release build, PIX stops working. In the debug build it shows the window, the meshes, and everything, but on exit it doesn't produce the single-frame capture log.

Original Code

Updated code

Saturday, February 23, 2013

Ready for showdown!

We are ready with our prototype for Rover Rescue. We showed the prototype to cohort 2 and they really liked it. Here are some very thoughtful comments from cohort 2:
   
  •   The emotional connection between the player and the game needs to be stronger.
  •   It is more of a real time tactical game than real time strategy.
  •   Controlling each dog would be difficult.
  •   Better camera view so that the dogs are more visible and the player feels sad when a dog dies.

   

A lot of the comments focused on Zeph's presentation. Most people gave Zeph advice and talked about ways to make the presentation better. On Monday we are going to present our game to the industry panel. Waiting for it...

Friday, February 22, 2013

Graphics Assignments 6 and 5 Part B



This assignment was pretty cool. Exporting the Maya models into the project as a text file was simple: all I did was parse the data from the vertexBuffer and indexBuffer into a txt file. After that, the next big task was converting the text file to a binary file. I had to spend several hours wrapping my head around this concept; doing it in C++ was really tough to grasp. I did a lot of research on Google on how to convert the file. I found some approaches, but when I implemented them they didn't work, as I was trying to read the entire file into a buffer through a pointer and then write the whole thing to a binary file with ios::binary on.


     It took me around 10 hours to finish the 6th assignment. For writing the binary file, I created a struct similar to s_vertex. I copied all the data from the text file into the struct and then wrote it to the binary file. So now the binary file's data is laid out according to the s_vertex structure. In the cMesh class, all I did was create a vertexData pointer of struct s_vertex and read the entire data in. Once executed, it worked like a charm.

After this I started my 5th assignment part A. I completed it within an hour, and as I write this blog it's already 5:56 on the clock. Time to submit.

Saturday, February 16, 2013

Graphics Assignment 4

This assignment was a nightmare. Like, seriously. But when I was done with it, I felt it was not that big of a deal. What made it worse was a mistake of updating all the meshes together and then drawing them together, which messed up my cubes.

  Frankly, it took me more than 20 hours in total to complete this one. First, I created parsers for the four files: Effect, Material, Entity, and Scene. Then I started structuring the code. I partitioned the renderer code into a cEffect class, which holds the shaders and related data. Everything related to the vertex and index buffers is in cMesh.

My controls are still the same: WASD moves the camera up, left, down, and right respectively; IJKL moves the light the same way; and Num8, 4, 2, 6 move the mesh.


 I have added the camera position and light details to the scene file. Other than that, I have nothing new to write about. The major work, I felt, was the structuring and parsing. Setting the Z-buffer and clearing it on every draw call was something new.

PIX screenshot:




Friday, February 15, 2013

Graphics Assignment 5


I have some problems with my 4th assignment, which I am planning to submit ASAP. So I took Vaibhav Bhalerao's 4th assignment as the base for my 5th assignment, on specular lighting.


This was a simple concept, which is why I could do it in 2 hours. I did Phong specular lighting. Since we had already normalized the light direction in the 3rd assignment, getting the reflected light with the HLSL reflect() intrinsic was a cakewalk. Next was getting the normalized eye position: as discussed in lecture, the last row of the worldToView matrix holds the camera's (eye's) translation. I copied almost the entire algorithm from the reference.
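
For reference, a minimal sketch of the Phong term in HLSL; the names and the shininess exponent are my own:

float3 PhongSpecular( float3 i_normal_world, float3 i_toLight_world,
                      float3 i_toEye_world, float3 i_light_color )
{
    // reflect() expects the incident vector, i.e. from the light toward
    // the surface, hence the negation
    float3 r = reflect( -normalize( i_toLight_world ),
                        normalize( i_normal_world ) );

    // 32 is an arbitrary illustrative shininess exponent
    float specular = pow( saturate( dot( r, normalize( i_toEye_world ) ) ),
                          32.0 );

    return specular * i_light_color;
}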


 One thing I could not get right was the attenuation. I don't know if I was doing it right: when I tried using the attenuation value, every cube turned white and lost its textures.

  Understanding and adding the entities and textures for more cubes was fairly easy, as the code I was working on was quite well structured.

Download Link

Rover Rescue ready for show!

Working on Rover Rescue has been quite fun. We are using Unity3D for our RTS/RTT survival game. Unity3D is not the ideal engine for an RTS; it is best suited to first/third-person camera views, as things are simplified for those. Getting an isometric RTS view and building a prototype around it takes a little more effort. The reason we chose Unity3D is that it is a really powerful prototyping tool, and developing a prototype with it is easiest.
      
      I have been working on HP management for the characters. The game is about dogs, the main characters, fighting Mars' environmental hazards to collect rover parts. While they are off collecting rover parts, their food and air levels decrease. For now we are planning to keep only these two HPs; as we brainstorm, we plan to come up with more exciting and original HP levels. So controlling each individual dog's HP levels, food and air, is what I am working on, while Yuntao is working on collecting the different parts of the rover. For now we are just going to show the HPs as percentages in the HUD layer.

Things I have done:

  • Ray casting for position detection.
  • Reducing each dog's air and food levels at a constant rate.
  • Increasing HP levels when a dog collects food or air.
  • Changing the dogs' speed depending on their air and food levels.
  • Collecting resources, that is, air and food, on the map.
   
   Yuntao and I have pretty much merged our code with Yang's mini-map and Wang's level design. We are ready to show our prototype to cohort 2.

Tuesday, February 5, 2013

Prototype for Cohort 2

I am finally in Zeph's group working on "Mars Rovers". The team for now is Zeph (producer), Alice (artist), me (engineer), Yuntao (engineer), Yang (engineer), and Wang (engineer). I picked this team because "Mars Rovers" is an RTS game. I grew up playing RTS games and I would love to contribute to one like "Mars Rovers".

      Until a week ago we were iterating on the game idea. We came up with several themes and several mechanics. But something didn't feel right, as all of those themes and mechanics had already been used in many RTS games. I was doubting the theme and mechanics at every stage of the iteration; I just didn't want to settle on one thing. So we kept iterating until Roger helped us with some game themes. We then started iterating on the game idea again, keeping Roger's suggestions in mind.

   The game idea has come a long way. We now have an RTS survival game. Bringing RTS lovers and first-person/third-person survival lovers together could be our research statement.

    For the past week we have been working on the prototype that we have to show to Cohort 2 next week. We are using Unity3D, writing most of the scripts in JavaScript and C#.

Sunday, February 3, 2013

Game pitch for IGF

    The semester has started and I just remembered (thinking about my last semester's grades) that writing here is really important. So, talking about this semester: we, cohort 3, have to work on a game for almost a year and present it to the IGF, most likely by the end of this year.
   
       For this we were supposed to pitch a game idea. Everyone in cohort 3 had to participate. We were given a week to polish our game ideas (as we were supposed to have been thinking about them since the first semester). I, frankly, didn't have any idea. I did not want to pitch my thesis game idea from the first semester; I wanted to come up with a novel concept. I really was thinking of the big thing - the IGF.
  
    I did a lot of research on which games get into the IGF and which ones actually win. I spent the whole week doing just that, trying to come up with a new, IGF-worthy idea. New ideas don't just pop into your brain, and they don't come when you try to copy other games' ideas either. As Jonathan Blow says, "Let me take my deepest vulnerabilities and put it in a game". And as Tracy Fullerton says, "game ideas come when you try to relate them to your own life".
   
    It was Monday, 9:00 PM, and I was still not sure what to do. I asked my classmates if I could join their groups, as participation was mandatory. But deep down I didn't feel like pitching someone else's ideas just because I had none of my own. I decided that however badly I sucked, I would present my own pitch the next day. The only problem was that I didn't have anything to present.

     Keeping Jonathan Blow and Tracy Fullerton in mind, I knew it was inside me and I just had to take it out. I tried to think about my deepest vulnerabilities, and then and there my whole game (Footprints) was right in front of me. I worked the whole night to get it right, to eliminate all the possible flaws in the game idea. It was 5 AM when I decided to sleep.
   
     Wise men have said, "Don't sleep for 2 hours when you have a presentation the next day." But I am wiser. I got up at 7 AM and started making my Prezi slides. Everything was right in time and place, except me. I felt miserable and started doubting the whole thing and where it was going. I couraged up and presented my game idea.


I have just one word for how my presentation went - DISASTROUS. Then and there I knew that I had ruined the whole thing.

My hidden intention behind writing this post in such detail must be to gain some sympathy. I just know that I tried really hard to put my idea in front of the audience, regardless.




Friday, February 1, 2013

Graphics Assignment 3

 Download Code

    I found this assignment pretty easy compared to the previous one. I added a texture (a .png image) to the cube. I used the D3DXCreateTextureFromFile function to load the texture onto the device (the IDirect3DDevice9 object), then set it with SetTexture. In the vertex shader I just set o_texture = i_texture; it didn't need any other changes. In the fragment shader, I created a sampler2D object and applied its sample to o_color. Applying the texture needed just a couple of LOC.

Screenshot of PIX:



Getting the point light effect just needed the world position of the cube. In the fragment shader, I normalized both the world position of the box and the position of the point light. Taking a dot product and multiplying it with o_color gave me the light effect. I used a simple if branch to clamp the negative values of the dot product. The light position can be changed with I [up], J [left], K [down], L [right].
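
For reference, the usual form of the diffuse term dots the surface normal with the normalized direction to the light; a sketch with assumed names, using max() where the post uses an if branch:

uniform float3 g_light_position_world;

float3 DiffuseLit( float3 i_position_world, float3 i_normal_world,
                   float3 i_color )
{
    // Direction from the shaded point toward the point light
    float3 toLight = normalize( g_light_position_world - i_position_world );

    // max() clamps the negative side, same effect as an if branch
    float lambert = max( 0.0, dot( normalize( i_normal_world ), toLight ) );

    return i_color * lambert;
}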

I have also taken care of the linker errors in the release build. Now the project works fine in release.

Friday, January 25, 2013

Graphics Assignment 2


Download Link: Download Code

I had a tough time with this assignment. The most challenging part was understanding the transformations. Another problem I faced was understanding the HLSL part and getting it working. For a long time I couldn't figure out how to get colors on my cube, as it was appearing all white on every side. I then set o_color = i_color and it worked.
     Getting the camera and the cube moving around was a cakewalk after understanding all the transformations. I found that PIX is really useful for debugging once you know how to use it. I am getting better at PIX, and in this assignment I learnt a lot about the different things it can do. I spent around 3 days figuring out the transformations and how they work. But now that I am done, everything looks easy.
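
For anyone else untangling the transforms, the whole chain in the vertex shader is just three multiplies; a sketch with my own uniform names:

uniform float4x4 g_model_to_world;
uniform float4x4 g_world_to_view;
uniform float4x4 g_view_to_projected;

void main_vs( in float4 i_position : POSITION,
              in float4 i_color : COLOR0,
              out float4 o_position : POSITION,
              out float4 o_color : COLOR0 )
{
    // Model -> world -> view -> projected, one space at a time
    float4 world = mul( i_position, g_model_to_world );
    float4 view = mul( world, g_world_to_view );
    o_position = mul( view, g_view_to_projected );

    // Forwarding the vertex color was the fix for the all-white cube
    o_color = i_color;
}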

I was not able to get the project running in the release build, but I will try to fix those errors ASAP and make sure it builds properly for the next assignment.

My cube.txt file looks like this:

8 12
-1.0 -1.0 -1.0 0 255 60
-1.0 1.0 -1.0  0 255 0
1.0 1.0 -1.0 0 0 255
1.0 -1.0 -1.0 25 0 0
-1.0 -1.0 1.0 30 0 255
-1.0 1.0 1.0 0 44 255
1.0 1.0 1.0 255 0 0
1.0 -1.0 1.0 0 255 55

0 1 3
2 3 1
2 7 3
2 6 7
6 5 7
5 4 7
5 1 4
1 0 4
5 2 1
5 6 2
4 0 3
4 3 7

The pre- and post-vertex-shader PIX shots:



Friday, January 18, 2013

Graphics Assignment 1

Link to code: http://www.sendspace.com/file/xznpk7

I created a .txt file to save the coordinates of the rectangle. The .txt file looks something like this:


-0.5f,
-0.5f,
-0.5f,
0.0f,
0.0f,
-0.5f,
0.0f,
-0.5f,
-0.5f,
0.0f,
0.0f,
0.0f,

The file parser takes the x and y coordinates of each vertex from consecutive lines: the first value in the file is the x coordinate of the first vertex, followed by the y coordinate of the first vertex, and so on.

I saw different patterns when I tweaked the values in the commented-out parts of the pixel and vertex shaders.



I found it difficult to display the rectangle using two triangles. By changing the size of the buffer from 1 to 2, and changing the PrimitiveCount to 2, I was able to render the rectangle on the screen.