Thursday, May 2, 2013

Cool Graphics Scene

This final project was pretty cool: I tried to implement as many of the concepts I learned in the graphics class as possible. Here are some of the things I did:

SkyBox around the world.
Shadows of every object.
Environmental map on the water.
Normal map on house.

To move around, use WASD for the camera. Use IJKL to move the light.

I don't have much to say about this assignment. But there is a freakin' zebra on my beach if you zoom in.





Graphics Assignment 12

This week's assignment was on shadows. Shadows only work for opaque entities. To make the entities in your scene cast shadows, you first need a directional light. We use a directional light for shadows because, unlike a point light, a directional light's effect doesn't change much with its position. To cast shadows we take the directional light's position and, conceptually much like a ray cast, create a shadow map of the scene from the light's point of view. It is the same as creating the view depth map, but from the light's perspective. So instead of passing the world_to_view and view_to_projected transform matrices, we pass the world_to_light and light_to_projected transform matrices. Thus we get a depth map from the light's perspective (the shadow map).

   While drawing the opaque bucket, we calculate the object's position in light space and also in light-projected space. We pass these values to the fragment shader as texture coordinates. In the fragment shader we pass the previously created shadow map as a sampler2D. We perform the W divide on the light-projected position and remap the coordinates from [-1, 1] to [0, 1]. We sample the shadow map at this value and take the x (red) component, as it contains the depth. This gives us the previous depth. We compare the previous depth with the new depth, that is, the z coordinate of the object in light space. If the previous depth value is greater than the new depth value, the pixel is not in shadow; otherwise it is in shadow.
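The comparison described above can be sketched in plain Python (this is illustrative, not the actual shader; the bias term and the sampling callback are assumptions added for the sketch):

```python
def shadow_test(light_clip_pos, sample_shadow_map, bias=0.005):
    """Return True if the fragment is in shadow.

    light_clip_pos: (x, y, z, w) position in the light's projected space.
    sample_shadow_map: callable (u, v) -> depth stored in the red channel.
    """
    x, y, z, w = light_clip_pos
    # W divide: clip space -> normalized device coordinates in [-1, 1]
    ndx, ndy, ndz = x / w, y / w, z / w
    # Remap [-1, 1] -> [0, 1] to get shadow-map texture coordinates
    u, v = ndx * 0.5 + 0.5, ndy * 0.5 + 0.5
    previous_depth = sample_shadow_map(u, v)  # depth already in the map
    new_depth = ndz * 0.5 + 0.5               # this fragment's depth
    # In shadow when something nearer to the light was already recorded
    return new_depth > previous_depth + bias
```

The small bias avoids self-shadowing artifacts ("shadow acne") from comparing a surface's depth against its own stored depth.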

PIX capture of the shadow map from the light's projected space:



Wednesday, May 1, 2013

Summer Plans and The future of Co-signers


The Co-signers team had a meeting just before finals week to discuss the future of the game during the summer break. We discussed what everyone is doing over this period of three months. Almost everyone was quite unsure about their plans. Everyone is still looking for internships, but a couple of engineers and producers are sure to stay and work on the game. We have decided to meet again after a couple of months to see where everyone stands with internships and who is certain to work on the game during the summer.
     I am still nowhere with the internship. I haven't heard back from a lot of the companies I applied to, like EA, Activision, and some iOS gaming companies. I am waiting for finals week to get over so that I can start following up with these companies. Until then, even I am unsure whether I will be working on Co-signers at all. But I suppose by mid-May I will have a clear idea about my summer plans, like everyone else on the team. Either way, I am planning to work on Co-signers as much as possible during these three months.

Sunday, April 14, 2013

Working on Ping


I had taken the ping task for this week's sprint. Last week I implemented the Inventory System of the Thief side in Co-signers. The Inventory System keeps a record of all the tools the thief acquires. It also has a time-based renewal system that increases the count of all the tools at a set interval. The Thief-side Inventory System includes the following tools and gadgets:
·      Candy
·      Camera Bug
·      EMP
·      Flash Bang
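A time-based renewal inventory like the one described above might look roughly like this (an illustrative Python sketch, not the actual Unity script; the interval, cap, and starting counts are assumptions):

```python
class Inventory:
    """Thief-side inventory with time-based renewal (illustrative)."""

    def __init__(self, renew_interval, max_count=5):
        # Starting counts and the cap are assumptions for the sketch
        self.counts = {"Candy": 1, "Camera Bug": 1, "EMP": 1, "Flash Bang": 1}
        self.renew_interval = renew_interval  # seconds between renewals
        self.max_count = max_count
        self._elapsed = 0.0

    def update(self, dt):
        # Called once per frame with the frame's delta time;
        # every full interval grants one more of each tool, up to the cap
        self._elapsed += dt
        while self._elapsed >= self.renew_interval:
            self._elapsed -= self.renew_interval
            for tool in self.counts:
                self.counts[tool] = min(self.counts[tool] + 1, self.max_count)

    def use(self, tool):
        # Consume one unit of a tool if any are left
        if self.counts.get(tool, 0) > 0:
            self.counts[tool] -= 1
            return True
        return False
```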

Here is how it looks:




For the ping I took the Thief's rotation, which Unity provides in degrees, and set the compass rotation to the thief's rotation. Kiran was working on the ping on the Hacker side. We decided to pass the position of the ping in 2D (X and Z, as the Y coordinate doesn't matter) from the hacker side to the thief side over the network. Whenever a hacker pings on his side, the thief-side interface gets the position of the ping. I have written the thief-side script to normalize the difference between the thief position and the ping position, which gives me the direction of the ping, and then multiply it by the distance, which appears on the GUI compass and updates every frame for 15 seconds. It is a design decision to keep the ping active for 15 seconds.
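The direction/distance math above can be sketched in Python (names and the angle convention are illustrative; positions are (x, z) pairs with Y ignored, as described in the post):

```python
import math

def ping_direction(thief_pos, thief_rotation_deg, ping_pos):
    """Return (direction, distance, compass_angle) for a ping.

    thief_pos, ping_pos: (x, z) pairs; thief_rotation_deg: facing in degrees.
    """
    dx = ping_pos[0] - thief_pos[0]
    dz = ping_pos[1] - thief_pos[1]
    distance = math.hypot(dx, dz)
    # Normalized direction from the thief toward the ping
    direction = (dx / distance, dz / distance) if distance else (0.0, 0.0)
    # World-space bearing of the ping (0 deg along +Z, Unity-style),
    # made relative to the thief's facing for drawing on the GUI compass
    bearing = math.degrees(math.atan2(dx, dz)) % 360.0
    compass_angle = (bearing - thief_rotation_deg) % 360.0
    return direction, distance, compass_angle
```

A ping straight ahead of the thief yields a compass angle of 0, so the compass needle stays centered until either side moves.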

Friday, April 12, 2013

Graphics Assignment 11

Assignment 11 is about post-processing. We draw the opaque bucket and the translucent bucket to a surface, say a post-processing surface, and then we apply effects like vignette or bloom to that texture.
   
  Steps of getting the Vignette Effect:
Get all the entities (opaque and translucent) rendered to the post-processing surface.
Use the texture of this surface as a uniform in the fragment shader of the post-process entity.
For a given tex coordinate, sample this texture and also sample the vignette image (passed as a uniform), then multiply these two float4 values; the result is the output color.
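The per-texel multiply in the last step can be sketched like this (illustrative Python, not the actual fragment shader; textures are stand-ins represented as 2D lists of RGBA float4 tuples):

```python
def apply_vignette(scene, mask):
    """Multiply the post-processing surface texture by the vignette
    image texel-by-texel. Both inputs are 2D lists of RGBA tuples of
    equal size; each output texel is the component-wise product."""
    return [[tuple(s * m for s, m in zip(sc, mk))
             for sc, mk in zip(scene_row, mask_row)]
            for scene_row, mask_row in zip(scene, mask)]
```

A vignette mask that is white in the center and dark at the edges therefore leaves the center of the scene untouched and darkens the borders.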


Steps of getting the HUD:
Take a quad and set it to have a texture of the HUD to be displayed.
In the vertex shader, resize the quad (I am dividing the model-space position by 2 and subtracting an offset to position it on the screen) and pass that position to the hardware.
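That vertex transform amounts to the following (a sketch; the default offset value is an assumption, since the actual offset is tuned per HUD element):

```python
def hud_vertex(model_pos, offset=(0.5, 0.5)):
    """HUD quad vertex transform: model-space position divided by 2,
    minus an offset to place the quad on screen (clip-space x, y)."""
    x, y = model_pos
    return (x / 2.0 - offset[0], y / 2.0 - offset[1])
```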

Output:






Download Code

Graphics Assignment 10

The 10th assignment was on deferred rendering. You hold the output of the opaque bucket and pass it to the translucent bucket to create a fading effect where translucent objects intersect the opaque objects.

   Initial depth pass:
     The opaque bucket is asked to draw the depth of the opaque objects relative to the distance from the camera. We create a new surface for drawing this depth. Instead of using the D3DFMT_X8R8G8B8 format for the surface, we take D3DFMT_R16F, because the value we want to output from the fragment shader is depth, which requires only a red channel. To make the fragment shader output the depth of the opaque bucket to the viewDepth surface, we set the render target to that surface instead of the back buffer. The texture is then available from the viewDepth surface.

Draw Opaque Bucket:
    After we get the depth of all the opaque objects, we can draw the opaque bucket to a different surface, say an opaqueBuffer surface. The order of drawing the opaque bucket and the view depth doesn't matter.

Draw translucent Bucket:
  Set the render target to the back buffer and StretchRect the opaqueBuffer surface to copy it to the back buffer. Now calculate the distance from the camera of the pixel to be shaded and compare it with the texel of the view depth texture (that is, the depth of the opaque object in view space). We saturate the difference of these two values so that we get a value between 0 and 1. This then becomes the alpha value of the pixel.
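The alpha computation above can be sketched as follows (illustrative Python, not the actual shader; the scale factor is an assumed tuning parameter):

```python
def fade_alpha(pixel_view_depth, opaque_view_depth, scale=1.0):
    """Translucent-pass alpha: saturate the difference between the
    opaque depth (from the view depth texture) and the shaded pixel's
    view-space depth, so geometry fades out near opaque surfaces."""
    diff = (opaque_view_depth - pixel_view_depth) * scale
    return max(0.0, min(1.0, diff))  # saturate to [0, 1]
```

A translucent pixel right at an opaque surface gets alpha near 0 (fully faded), while one well in front of it gets alpha 1.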

To see the fading effect properly, use the NUM8, NUM4, NUM2, and NUM6 keys, which are mesh controls.

Final Output:








View Depth Texture:


Hardware Depth Buffer:



Download Code








Friday, April 5, 2013

Carrying forward with Unity and Prep for Alpha

As a team we have decided that we are going to use Unity for the game. Following up on my last post about "Do and Don't at GDC", I want to elaborate a little more on what the industry looks for in an entry-level programmer.

  So, talking to professionals from different game companies made it really clear what I have to do over the next year to get into the gaming industry. First things first: depth is more important than breadth. Professionals don't expect you to know every game engine. They don't even expect you to know the technologies they work on, which is one reason general CS majors also get game dev jobs. What they really want is a doer. A problem solver. If you get stuck somewhere, they want you to solve the problem or find a way around it. Period. Be a problem solver and you will never be jobless, anytime.

Do what you do well. If you are a game tester and you want to change to game dev, then just excel at what you do. Get recognized for doing awesome work and you will get what you want. That is the way to do it. This doesn't apply to me, but it is something to remember for later.


At GDC Play, I played some really good-looking and novel games. After playing one game (I don't remember its name), I talked to its developer. The very first answer to my very first question baffled me: that game was made in Unity.

So all my misconceptions about using Unity for my thesis game got cleared up. Two very important things got addressed:

  1. Professionals don't care if you do your thesis game in Unity.
  2. Unity games can look really beautiful.


 So, based on what I learned at GDC, I want to go with Unity without any doubts. Alpha is approaching and we need to start preparing for it, so that we can present the game well to the industry professionals at EAE Fest.