Tuesday, September 24, 2013

Using Oracle, Savepoints in Python for unit testing

Perhaps you have wondered how, with Oracle, you can create a unit test that modifies the database temporarily and then rolls back the changes once the test completes.

The goal:
- To test a procedure that will handle most of the work

Here are the barriers I encountered during the design:
A- Should I test strictly in the database with PLUTO or Oracle's own tools, or via Python?
B- How exactly are transactions handled in Oracle?
C- How can I be sure that what I use in Python won't commit automagically?
D- How can I guarantee that even if it commits, it will be rolled back?
E- Is it even possible to perform a nested transaction such that I can run my tests inside a unit test?

Analysis
A - In either case the result would be an acceptable test, but I made the decision to test the procedure with Python. Why? Because in this system the procedure would be run through Python, so testing through Python is superior: it is closer to the true conditions of use.

B,C,D,E - Oracle permits nesting via savepoints within a transaction. However, when it comes to verifying those changes, only the owning session can read its own uncommitted changes. Therefore, if I can run raw SQL, I should be able to nest things in a way where I know who the parent is and can roll it back.
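As a minimal runnable sketch of that mark-then-roll-back idea, here it is with Python's stdlib sqlite3, which also understands SAVEPOINT / ROLLBACK TO (with cx_Oracle the same statements would simply go through connection.cursor(); the table and savepoint names are just illustrations):

```python
import sqlite3

# In-memory database standing in for the real Oracle connection.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # take manual control; no implicit commits
cur = conn.cursor()

cur.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
cur.execute("INSERT INTO accounts VALUES ('alice', 100)")

cur.execute("BEGIN")
cur.execute("SAVEPOINT TMBTBC52")  # mark the pre-test state
cur.execute("UPDATE accounts SET balance = 0 WHERE name = 'alice'")
# ... test assertions against the modified data would run here ...
cur.execute("ROLLBACK TO TMBTBC52")  # undo everything since the mark

cur.execute("SELECT balance FROM accounts WHERE name = 'alice'")
print(cur.fetchone()[0])  # 100 - the update was rolled back
```

The savepoint lets the test scribble on real tables and still hand back an untouched database, which is the whole goal here.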

I tried varied methods of attack. What I found is that with the cx_Oracle package, you effectively must use an Oracle cursor to set up transactions and run raw SQL.

F - Then I encountered a new barrier: how can I structure the transaction and setup statically so it doesn't have to be rerun each time, while keeping the same session?
G - Also, how can I prevent implied commits, and is that even required?

For F, I discovered that we can model a sort of singleton pattern, creating static (class-level) variables that are initialized once by the class. Now, you may wonder why this is a question at all. The reason is that if we modify the __init__ of the class, we risk breaking the normal way the inherited class is called. If we break that, how can we be sure that whatever unit-testing system we put on top will be able to properly call our unittest.TestCase? People don't like breaking the build, but how about breaking the build system? Searching for this yields a lot of noise, so forget the noise. We dodge __init__ with a static variable, checked and set in a function that we put into the setUp that is called before each unit test.
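Stripped of the Oracle specifics, the dodge looks like this (a toy sketch; the class and attribute names here are illustrative, not from the real suite):

```python
import unittest

class OneTimeSetupDemo(unittest.TestCase):
    ran_once = False  # class-level ("static") flag

    def perform_one_time_setup(self):
        # Runs the expensive work (e.g. a DB connection) only once,
        # without touching __init__ and risking the TestCase contract.
        if not OneTimeSetupDemo.ran_once:
            OneTimeSetupDemo.shared_resource = "connected"
            OneTimeSetupDemo.ran_once = True

    def setUp(self):
        unittest.TestCase.setUp(self)
        self.perform_one_time_setup()

    def test_a(self):
        self.assertEqual(OneTimeSetupDemo.shared_resource, "connected")

    def test_b(self):
        self.assertEqual(OneTimeSetupDemo.shared_resource, "connected")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(OneTimeSetupDemo)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Both tests see the same shared resource, but the setup body executes only on the first setUp call.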

class TestMagicBankTransferBasicCase_Sprint5_Item2(unittest.TestCase):
    ranonce = None

    def performOneTimeSetup(self):
        if not TestMagicBankTransferBasicCase_Sprint5_Item2.ranonce:
            # connect to db
            TestMagicBankTransferBasicCase_Sprint5_Item2.connection = somedbutil.connectToDev('cx_Oracle', password)
            if not somedbutil.connected():
                sys.exit('Error: Could not connect to db')
            # note: cx_Oracle expects :1,:2 style bind variables, not %s
            TestMagicBankTransferBasicCase_Sprint5_Item2.sql_every_function_needs = "call myprocedure.thisisfnnuts(:1,:2)"
            TestMagicBankTransferBasicCase_Sprint5_Item2.rollback_directive = "ROLLBACK TO TMBTBC52"
            TestMagicBankTransferBasicCase_Sprint5_Item2.ranonce = True

    def setUp(self):
        # do this, it is the right thing to do
        unittest.TestCase.setUp(self)
        self.performOneTimeSetup()
        # Part of F's answer, not tested to verify it is required
        TestMagicBankTransferBasicCase_Sprint5_Item2.connection.autocommit = 0  # assuming this exposes the oracle connection object
        self.conn = TestMagicBankTransferBasicCase_Sprint5_Item2.connection
        cursor = self.conn.cursor()
        savepoint_directive = "SAVEPOINT TMBTBC52"  # not tested for naming rules in oracle
        cursor.execute(savepoint_directive)
   

    def test_transfered(self):
        # call my proc
        # run tests
        pass

    def tearDown(self):
        # again, do this, it is the right thing to do
        unittest.TestCase.tearDown(self)
        cursor = self.conn.cursor()
        cursor.execute(TestMagicBankTransferBasicCase_Sprint5_Item2.rollback_directive)

Why did I blog on this?

Because I could not find a single reference covering the actual use of savepoints with Oracle from Python.

Have fun storming the castle.

Thursday, July 11, 2013

Commentary on management; In the house yesterday

I felt compelled to comment on an article from Nabeel's blog about the management of a high-value CTO. The article is about working under a degrading "merit system". As an employer, your communications must be clear, and exceptions should only be granted where the risk is balanced.

I got to spend a little time in the House of Reps yesterday for a startup-related event. It was an enjoyable time, and Congressman Polis has earned a lot of respect from me for his genuine efforts to engage the startup community; this was the second event at which I have heard him speak.

Wednesday, July 10, 2013

Treefrog web framework (C++) and mongo models; contributing some vision

Treefrog is one of the few C++ web frameworks to emerge with good documentation and a dedicated development team. I personally like it so far, predominantly because it works and, for C++, it is easy. Note that file references appear as they do because I am on a Mac trying things out.

There really is no reason we cannot perform CRUD operations with mongo; we just need to standardize those CRUD operations. However, in 1.6.1, even with Qt >= 5, Treefrog is only mostly there. I don't really expect scaffolding tools - NoSQL is rather loosely defined, after all - just the ability to create a model abstraction.

more /Library/Frameworks/treefrog.framework/Versions/1/Headers/tabstractmodel.h
#ifndef TABSTRACTMODEL_H
#define TABSTRACTMODEL_H

#include 
#include 

class TSqlObject;
class TModelObject;


class T_CORE_EXPORT TAbstractModel
{
public:
    virtual ~TAbstractModel() { }
    virtual bool create();
    virtual bool save();
    virtual bool update();
    virtual bool remove();
    virtual bool isNull() const;
    virtual bool isNew() const;
    virtual bool isSaved() const;
    virtual void setProperties(const QVariantMap &properties);
    virtual QVariantMap toVariantMap() const;

    static QString fieldNameToVariableName(const QString &name);

protected:
    virtual TSqlObject *data() { return NULL; }  // obsolete
    virtual const TSqlObject *data() const { return NULL; }  // obsolete
    virtual TModelObject *modelData() { return NULL; }
    virtual const TModelObject *modelData() const { return NULL; }

    virtual TModelObject *mdata();
    virtual const TModelObject *mdata() const;
};


inline QString TAbstractModel::fieldNameToVariableName(const QString &name)
{
    QString ret;
    bool existsLower = false;
    bool existsUnders = name.contains('_');
    const QLatin1Char Underscore('_');

    ret.reserve(name.length());
more /Library/Frameworks/treefrog.framework/Versions/1/Headers/TreeFrogModel 

So having seen this I thought: great, I can probably make my own MongoDB model. Then I thought - wait, I better check on that.

#include "tabstractmodel.h"
#include "tcriteria.h"
#include "tcriteriaconverter.h"
#include "tfnamespace.h"
#include "tglobal.h"
#include "tmodelutil.h"
#include "tsqlormapper.h"
#include "tsqlormapperiterator.h"
#include "tsqlobject.h"
#include "tsqlquery.h"
#include "tsqlqueryormapper.h"
#include "tsqlqueryormapperiterator.h"
#include "twebapplication.h"

#if QT_VERSION >= 0x050000
#include 
#include 
#include 
#include 
#endif

So I spoke with the developer, and he said he was planning to expand the model class to permit Mongo as well.

ao27 via lists.sourceforge.net 
9:16 PM (12 hours ago)

to treefrog-user 
Hello Mehmet,

To access MongoDB server easily, I considered that the framework
should execute it
through TAbstractModel class.

I am going to modify TAbstractModel class so that its object can have not only a
TSqlObject object but also a TMongoObject object.
That means that TAbstractModel class can access MongoDB server.


  [ Inheritance Hierarchy ]

        TModelObject
             |
   +---------+--------+
   |                  |
TSqlObject        TMongoObject

A TAbstractModel object has the pointer to a TModelObject object,
can perform CRUD operations.

Regards,

aoyama

This guy is awesome. If he acts on it, it wouldn't be the first time he indulged me. Thanks Aoyama!

Still, I can't be sure what he plans to do. I presume it shouldn't be too tough, because he made both TSqlObject and TMongoObject use QVariant maps, which suggests to me we may have been thinking alike all along.

Friday, July 5, 2013

Pion continued {fail}

We're working with command line tools first

./autogen.sh
glibtoolize: putting auxiliary files in AC_CONFIG_AUX_DIR, `m4'.
glibtoolize: linking file `m4/ltmain.sh'
glibtoolize: putting macros in AC_CONFIG_MACRO_DIR, `m4'.
glibtoolize: linking file `m4/libtool.m4'
glibtoolize: linking file `m4/ltoptions.m4'
glibtoolize: linking file `m4/ltsugar.m4'
glibtoolize: linking file `m4/ltversion.m4'
glibtoolize: linking file `m4/lt~obsolete.m4'
configure.ac:21: installing 'm4/compile'
configure.ac:24: installing 'm4/config.guess'
configure.ac:24: installing 'm4/config.sub'
configure.ac:15: installing 'm4/install-sh'
configure.ac:15: installing 'm4/missing'
services/Makefile.am: installing 'm4/depcomp'
parallel-tests: installing 'm4/test-driver'
./configure
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... m4/install-sh -c -d
checking for gawk... no
checking for mawk... no
checking for nawk... no
checking for awk... awk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking whether make supports nested variables... (cached) yes
checking for style of include used by make... GNU
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables... 
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking dependency style of gcc... gcc3
checking whether gcc and cc understand -c and -o together... yes
checking build system type... x86_64-apple-darwin11.4.2
checking host system type... x86_64-apple-darwin11.4.2
checking how to print strings... printf
checking for a sed that does not truncate output... /usr/bin/sed
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for fgrep... /usr/bin/grep -F
checking for ld used by gcc... /usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld
checking if the linker (/usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld) is GNU ld... no
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm
checking the name lister (/usr/bin/nm) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 196608
checking whether the shell understands some XSI constructs... yes
checking whether the shell understands "+="... yes
checking how to convert x86_64-apple-darwin11.4.2 file names to x86_64-apple-darwin11.4.2 format... func_convert_file_noop
checking how to convert x86_64-apple-darwin11.4.2 file names to toolchain format... func_convert_file_noop
checking for /usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld option to reload object files... -r
checking for objdump... objdump
checking how to recognize dependent libraries... pass_all
checking for dlltool... dlltool
checking how to associate runtime and link libraries... printf %s\n
checking for ar... ar
checking for archiver @FILE support... no
checking for strip... strip
checking for ranlib... ranlib
checking command to parse /usr/bin/nm output from gcc object... ok
checking for sysroot... no
checking for mt... no
checking if : is a manifest tool... no
checking for dsymutil... dsymutil
checking for nmedit... nmedit
checking for lipo... lipo
checking for otool... otool
checking for otool64... no
checking for -single_module linker flag... yes
checking for -exported_symbols_list linker flag... yes
checking for -force_load linker flag... yes
checking how to run the C preprocessor... gcc -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for dlfcn.h... yes
checking for objdir... .libs
checking if gcc supports -fno-rtti -fno-exceptions... no
checking for gcc option to produce PIC... -fno-common -DPIC
checking if gcc PIC flag -fno-common -DPIC works... yes
checking if gcc static flag -static works... no
checking if gcc supports -c -o file.o... yes
checking if gcc supports -c -o file.o... (cached) yes
checking whether the gcc linker (/usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld) supports shared libraries... yes
checking dynamic linker characteristics... darwin11.4.2 dyld
checking how to hardcode library paths into programs... immediate
checking for dlopen in -ldl... yes
checking whether a program can dlopen itself... yes
checking whether a statically linked program can dlopen itself... yes
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... yes
checking for doxygen... no
configure: WARNING: doxygen not found - will not generate any doxygen documentation
checking for perl... /usr/bin/perl
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking dependency style of g++... gcc3
checking how to run the C++ preprocessor... g++ -E
checking for ld used by g++... /usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld
checking if the linker (/usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld) is GNU ld... no
checking whether the g++ linker (/usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld) supports shared libraries... yes
checking for g++ option to produce PIC... -fno-common -DPIC
checking if g++ PIC flag -fno-common -DPIC works... yes
checking if g++ static flag -static works... no
checking if g++ supports -c -o file.o... yes
checking if g++ supports -c -o file.o... (cached) yes
checking whether the g++ linker (/usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld) supports shared libraries... yes
checking dynamic linker characteristics... darwin11.4.2 dyld
checking how to hardcode library paths into programs... immediate
checking for C++ compiler vendor... gnu
checking for OSX binary architectures... no
checking for specific CPU architecture... no
checking for debugging... no
checking for plug-ins directory... /usr/local/share/pion/plugins
checking for boostlib >= 1.35... configure: error: We could not detect the boost libraries (version 1.35 or higher). If you have a staged boost library (still not installed) please specify $BOOST_ROOT in your environment and do not give a PATH to --with-boost option.  If you are sure you have boost installed, then check your version number looking in . See http://randspringer.de/boost for more documentation.
Tried locate boost.hpp, because I know headers are labelled like that. Then tried locate *.hpp and found an instance of the location. See "How to get started" at https://sites.google.com/site/alexeyvakimov/mini-tutorials/programming-boost-python-c
export BOOST_ROOT=/opt/local/include/boost
No dice
checking for boostlib >= 1.35... configure: error: We could not detect the boost libraries (version 1.35 or higher). If you have a staged boost library (still not installed) please specify $BOOST_ROOT in your environment and do not give a PATH to --with-boost option.  If you are sure you have boost installed, then check your version number looking in . See http://randspringer.de/boost for more documentation.
http://randspringer.de/boost Looked up http://www.randspringer.de/boost/upt.html

Decided to try the config flag.

 ./configure --with-boost=/opt/local/include/boost

No dice

Checking my version of boost for my sanity

 grep BOOST_LIB_VERSION /opt/local/include/boost/version.hpp
//  BOOST_LIB_VERSION must be defined to be the same as BOOST_VERSION
#define BOOST_LIB_VERSION "1_52"
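configure is just comparing version numbers here. For sanity's sake, the macro the grep found can be decoded in a couple of lines of Python (a toy illustration of the comparison, not what configure actually runs):

```python
import re

# The line grepped out of /opt/local/include/boost/version.hpp above.
header_line = '#define BOOST_LIB_VERSION "1_52"'

def boost_version(line):
    """Extract (major, minor) from the BOOST_LIB_VERSION macro."""
    match = re.search(r'"(\d+)_(\d+)', line)
    return int(match.group(1)), int(match.group(2))

# Tuple comparison mirrors configure's ">= 1.35" requirement.
print(boost_version(header_line) >= (1, 35))  # True
```

So the installed Boost comfortably satisfies the >= 1.35 requirement; the problem is purely that configure can't find it.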

Screw it, let's try macports

sudo port selfupdate
sudo port upgrade outdated

Many package updates later.....

Having a lot of trouble getting configure to pay attention to the BOOST_ROOT variable and --with-boost, I am taking the path of least resistance here, which I think is probably Xcode. On a Linux box or VM, the libraries would likely be in the right place anyway.

Now I went to try this in Xcode which is Version 4.5.2 (4G2008a)

After attempting to fix the command-line build with --boost-lib= to no avail, I brought it into Xcode using Header Search Paths under the Build Settings tab (click on the project in the main window). This is located under the heading "Search Paths"; if you get overwhelmed by these settings, there is a wonderful little search box in the top right corner that I use to pick out the right setting.

You hit the plus, but don't worry about specifying a specific architecture or SDK; just type in the path.

You can use the command line tool "locate" to help.

However, after doing this Xcode complains about not finding a log4cpp header. It happens often enough that a tool is so common, and its documentation so scattered, that it becomes hard to tell whether anyone left directives to guide people.

This is probably easily remedied: we can port install log4cpp. Since the compiler is looking for a header, we should look for a development version rather than a binary-only one. If you are using C++ under *nix (Linux, Unix, Mac) variants, this is something you should get familiar with.

port search log4cpp
log4cpp @1.1 (devel)
    configurable logging for C++

log4shib @1.0.8 (sysutils, shibboleth, devel)
    configurable logging for C++, fork of log4cpp

Found 2 ports.

So here we clearly want log4cpp (devel).

sudo port install log4cpp
Now, after authenticating, we'll get:
--->  Fetching archive for log4cpp
--->  Attempting to fetch log4cpp-1.1_0.darwin_11.x86_64.tbz2 from http://packages.macports.org/log4cpp
--->  Attempting to fetch log4cpp-1.1_0.darwin_11.x86_64.tbz2.rmd160 from http://packages.macports.org/log4cpp
--->  Installing log4cpp @1.1_0
--->  Activating log4cpp @1.1_0
--->  Cleaning log4cpp
--->  Updating database of binaries: 100.0%
--->  Scanning binaries for linking errors: 100.0%
--->  No broken files found.

Now, because I am lazy, I'll try just running it; perhaps the search paths are already good enough. If not, we'll edit them.

I tried

locate logger.hpp

But locate's database hasn't been updated, so it won't find it. Looking in the same include area myself, I find log4cpp.

Oops, looks like I installed the wrong library! It should be log4cplus, but I installed log4cpp.

sudo port install log4cplus
Password:
--->  Computing dependencies for log4cplus
--->  Fetching archive for log4cplus
--->  Attempting to fetch log4cplus-1.0.4.1_0.darwin_11.x86_64.tbz2 from http://packages.macports.org/log4cplus
--->  Attempting to fetch log4cplus-1.0.4.1_0.darwin_11.x86_64.tbz2 from http://lil.fr.packages.macports.org/log4cplus
--->  Attempting to fetch log4cplus-1.0.4.1_0.darwin_11.x86_64.tbz2 from http://mse.uk.packages.macports.org/sites/packages.macports.org/log4cplus
--->  Fetching distfiles for log4cplus
--->  Attempting to fetch log4cplus-1.0.4.1.tar.xz from http://superb-dca2.dl.sourceforge.net/log4cplus
--->  Verifying checksum(s) for log4cplus
--->  Extracting log4cplus
--->  Configuring log4cplus
--->  Building log4cplus

Ok, now we should be good!

But hark, yet another error (plus warnings from a couple of places where values weren't cast).

ld: file not found: ~/.cloudmeter/openssl-1.0.1c/lib/libssl-pic.a
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Ok, it looks like we need to link against an SSL library from OpenSSL. We probably need to change a reference in our build.

So we should research this. After looking around, it seems I don't have a stock version of this library.

hmm, http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=246928

So perhaps we need a pic version?

Searching around I found this page in the docs Wiki for Pion

This page indicates that we don't need OpenSSL, let alone a PIC version. I'd rather have the SSL library, though.

At this point there are variables in the build that point to my home directory (.cloudmeter/library). Were I to continue, I'd probably have to dig into modifying these, but the destination for such a server isn't going to be a Mac; it would likely be a Linux box. So I think I will abandon this and pick it up some other time.

Pion appears to offer cheap Boost library web services. Reviewing the source may be worthwhile in the future.

Wednesday, July 3, 2013

Monday, July 1, 2013

A moment to poke fun at the GOOG

Hey we all love the GOOG right? I mean they have flawless execution, especially in their production environments. Ok, you probably know I am being a bit sarcastic.

Especially when we see that they can be as silly as the rest of us.

So I thought, maybe this was a recent little test a googler is running

2012-12, maybe they never got retraction for blogger announcements working! Whoops!

We live in an imperfect world. ♥ ya google! That isn't sarcasm.

revoltdc hackathon 20130622 [iteration 3 ] {Success}

Executive Summary

Attempted/Accomplished

  • Decided to do an isolated http.call instead of the double dependent http.call and get it right first
  • Identified a solution to using the router using autorun to avoid an infinite rendering loop
  • Discovered a bug blocking my use of reactivity that I needed to resolve

Lessons learned

  • LESS files must be stored in the client directory to be served
  • How to use a local collection
  • A good use of autorun
  • About a condition for infinite rendering

Questions and Todos

  • Find a way to test the existence of an image on another server before setting an image location (perhaps by looking at a status code via an http.call)
  • Better explore the relationship of the rendering chain when using Meteor.Router
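For the first todo, one way to test for an image's existence is a HEAD request, falling back to a placeholder on failure. A hedged sketch of the idea (in Python for brevity, though the app itself is CoffeeScript; the function name and fallback path are illustrative assumptions, and the opener parameter exists only so the logic can be exercised without a network):

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def image_url_or_fallback(url, fallback="/img/hamsterload.gif", opener=None):
    """Return url if a HEAD request answers with a 2xx status,
    otherwise the fallback image. opener is injectable for testing."""
    open_fn = opener or (lambda req: urlopen(req, timeout=5))
    try:
        resp = open_fn(Request(url, method="HEAD"))
        ok = 200 <= resp.status < 300
    except (HTTPError, URLError):
        ok = False
    return url if ok else fallback
```

The same status-code check could be done from a Meteor method via an http.call on the server, which is what the todo above contemplates.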

The details

Changeset here. If you choose to review this changeset, don't get thrown off: it adds a few images, and since SVG is stored in an XML-ish format, you'll see a lot of not particularly meaningful changes.

We are still on meteor 0.6.4, but now we are also using LESS. LESS is one of a group of pre-compiler CSS tools, letting us use less (no pun intended) CSS code to do more in a uniform manner. This helps us keep CSS consistent throughout the application at the cost of pre-compilation; the beauty of having it as a meteorite package is that meteor handles the pre-compilation for us automagically. Note that with coffeescript and LESS, once the scripts are deployed, error tracebacks will not line up directly with the source numbering, so read errors with that in mind. Finally, remember that if you check out the code, you'll need a sunlight api key of your own, and it goes in the root Server.coffee file.

In this iteration I decided to take a little break from the multiple calls to sunlight on the server and just get a single call working right first. This call is for biographic information for a legislator.

So here is a case where we succeed, but it isn't really clear what the dependencies are. To determine the truth, we will have to remove all possible dependencies or do further research.

Somewhere along the way I discovered that during rendering a "party" variable was undefined, and javascript interpretation of the function halted on this error. We had a bug lingering in the code, and it revealed a few things to me:

  • I need to think more "lisp"y or (in other words) recursively: as reactive changes occur, templates get rerendered, so we need to be extra careful about the initial states of variables in these templates.
  • We can't guarantee there will be any initial population of the variables we are using, and if javascript on the client breaks, we lose our updates
  • The reactivity of a local cursor works fine with Session set and get variables on the client.

So here is what success looks like

Now the post here works, but it's still very much a hack job. We pull the image using the govtrack id from the govtrack project to display it here, but when it isn't available we get a broken image. We have some code to post parties, but I don't know all of the codes sunlight uses. What happens when we get something more obscure for the party, like the justice party? So these links should be tested, and we need a default for when an image can't be fetched. In short, there is obviously a lot to fix, but it is nice to see things working to some degree.

Of note in the image: party rendering is called a few times, and we can see the fetched row from our cursor pointing to mini-mongo. Please note the distinction here - mini-mongo is created on the client only! Take a moment to examine the hints from the documentation, or just trust the following statement:

"The name of the collection. If null, creates an unmanaged (unsynchronized) local collection."

So let's look at how this is accomplished

Here is the file at this point in its history. Time to tear it up for my edification (and possibly yours):

#somewhere close to the beginning in Client.coffee under the root/client subdirectory
#Establish a null local collection, ooh la la
localbiocollection = new Meteor.Collection(null)


#let's observe, for the lulz, the collection as items are added
localbiocollection.find().observe(
        added: (doc,beforeIndex)->
            console.log(Session)
            console.log(doc)
)

So at this point we have declared a collection available to the client. This creates an empty mini-mongo db client side we can store to. I have a lot of questions about this: is this basically html5 only? What is the compatibility of mini-mongo? I guess time will tell. I am testing in a fairly modern version of Chrome.

Ok, now that we have accepted this magic on faith, let's look at a bug that consumed me for a while. I didn't investigate the error thrown by the undefined "bio" in the variables below, and that threw me off for a while. Our helpers are called during rendering and let us populate the corresponding handlebars template variables with what we return.

#somewhere in Client.coffee under the root/client subdirectory
Template.bio.helpers
        result: ->
            console.log("party rendering")
            #at this point we want the only record in this collection and we want it now, so we call fetch
            bio = localbiocollection.find().fetch()[0]
            console.log(bio)
            return bio
        party:->
            console.log("party rendering")
            #at this point we want the only record in this collection and we want it now, so we call fetch
            bio = localbiocollection.find().fetch()[0]
            #before this "if", everything was broken!!!!!
            #I could not count on my biocollection being populated before rendering
            if bio
                party = bio.party    
                img_url = "/img/parties/"
                if party == 'R'
                    img_url +="republican.svg"
                else if party == 'D'
                    img_url +="democratic.svg"
            else
                img_url = "/img/hamsterload.gif"
            return img_url

That if statement is very important. If we try to do things we shouldn't with a variable and it throws an error, the interpretation halts and our code doesn't update things properly. Javascript can be very forgiving and just as unforgiving.

Let's take a quick look at the template and bio.less files.


Let's take a moment to look at LESS. Based on the syntax highlighting, can you guess which code is LESS-specific? Hint: this is CSS highlighting. Note that even though a LESS file is essentially CSS, LESS files are not to be stored in the public directory, probably because of the pre-processing involved. bio.less is in the root client directory!

/*Located in the CLIENT directory */
.rounded-corners (@radius: 5px) {
  -webkit-border-radius: @radius;
  -moz-border-radius: @radius;
  -ms-border-radius: @radius;
  -o-border-radius: @radius;
  border-radius: @radius;
}

#bio_img {
  position: fixed;
  left: 0;
  top: 20em;
  width: 20em;
  margin-top: -2.5em;
  .rounded-corners (25px)
}

#bio_party_img{
  position: fixed;
  left: 125em;
  top: 20em;
  width: 20em;
  height: 20em;
  margin-top: -2.5em;
  .rounded-corners (25px)
}

body{
 font-family: "Book Antiqua",Georgia,"MS Sans Serif", Geneva, sans-serif;
 background: #A3181E url(/img/bg_body.gif) fixed repeat-x;
 margin: 0;
 margin: 0;
 padding: 0;
 height: 100%;
}

div.header_capitol{
    height: 149px;
    background: url(/img/Capitol_unselected.png) fixed no-repeat;
}

div.header_capitol_selected{
    height: 149px;
    background: url(/img/Capitol_Selected.png) fixed no-repeat;
}

div.content{
    position: relative;
    margin-left: 21em;
    margin-top: 7.5em;
    height: 100em;
    width: 100em;
    background: #eee;
    font: 18px;
    .rounded-corners (10px)
}

So now you've more or less seen the css behind this. If you answered "rounded-corners" by the way, give yourself a cigar (or your personal preferential substitute). Other than reusing that code, it more or less looks just like css, huh? Moving on.

An issue to consider is rendering, particularly with the Meteor Router add-on package. If we create dependencies in the router, it more or less becomes part of the rendering process. Inferring this might be the case, I had to find a way to excise rendering dependencies from the router code. If we set something reactive and create dependencies inside our helpers or rendering, we can end up in an infinite loop - if it even works. Instead, we rely at this juncture on autorun. We'll show you how to do that now:

Meteor.Router.add
     #.....snip out code .......
    '/bio/:query': (query) ->
            Session.set("bioquery",query)
            #Session.set("biographic",bio)
            Session.set("federal",true)
            #console.log(bio)
            #externalcall_dep.changed()
            return "bio"

You can easily tell from the comments in this code that I've been experimenting. In any event, here I set a bioquery, and I monitor it with an autorun. If I instead ran the meteor method (which in turn calls out to the Sunlight api) right here, I could end up in an infinite loop - assuming I am correct that dependencies trigger rerendering. The call completes later, so when the results return just milliseconds later, the router gets triggered, we get rerendering, the same meteor method gets called again in the process, and the same data gets fetched and asynchronously injected again. Bad news. Instead of letting this happen, an autorun block monitors for changes in what we want to query and inserts the result into the aforementioned local collection. This takes the calls out of the rendering process and sets a reactive source (the localbiocollection cursor) to update rendering.

Meteor.autorun ->
    bioquery = Session.get("bioquery")
    if bioquery
        bioquery_result = Meteor.call("fetchBiographic", bioquery,
                                (err,res) ->
                                    localbiocollection.insert(res)
                            )

Now the dependencies get updated properly. Fantastisch!!