Baracuda + Micro
  • #61
    @ODM, Nice explanation, appreciated...

In my tests I was seeing slight variations in otherwise perfectly constant pulse widths (random pulses were slightly longer).
    It all depended on how I coded the delay loops; some code would produce glitches and some would not...

    Having a scope connected has proved a useful tool...

    Mike



    • #62
      Try this code without interrupts.

      PHP Code:
#include <TimerOne.h>

#define CYCLE_TIME 1662 // 0.0015625 seconds @ 640 PPS // added 100 to 1562 to make it closer to 640 pps //

#define TX_PULSE         100 // (100µs)  TX pulse duration
#define PULSE_1_DELAY     20 // ( 20µs)  Delay before Sample Pulse 1
#define PULSE_1           45 // ( 45µs)  Sample 1 pulse duration
#define PULSE_2_DELAY    100 // (100µs)  Delay between Sample Pulse 1 and Pulse 2
#define PULSE_2           45 // ( 45µs)  Sample 2 pulse duration
/*
 Total pulse time         310
 -----------------------------
 Cycle time              1562
 Pulse times             -310
 Delay till next cycle   1252

 Actual pulse rate is 1 / 1562µs ≈ 640 PPS
*/

void setup()
{
  cli();                          // This code needs interrupts disabled.

  pinMode(A0, OUTPUT);
  pinMode(A1, OUTPUT);
  pinMode(A2, OUTPUT);

  digitalWrite(A0, LOW);
  digitalWrite(A1, LOW);
  digitalWrite(A2, LOW);

  Timer1.initialize(CYCLE_TIME);  // Configures and starts Timer1
  TIMSK1 |= _BV(TOIE1);           // Sets the timer overflow interrupt enable bit
}

void loop()
{
  while ((TIFR1 & _BV(TOV1)) == 0) {;}  // Waits for the TOV1 overflow flag.

  // Begin cycle
  PINC = 0x01;                    // writing 1 to a PINx bit toggles the pin: A0 high
  delayMicroseconds(TX_PULSE);
  PINC = 0x01;                    // A0 low
  delayMicroseconds(PULSE_1_DELAY);
  PINC = 0x02;                    // A1 high
  delayMicroseconds(PULSE_1);
  PINC = 0x02;                    // A1 low
  delayMicroseconds(PULSE_2_DELAY);
  PINC = 0x04;                    // A2 high
  delayMicroseconds(PULSE_2);
  PINC = 0x04;                    // A2 low
  // End cycle

  /* .... signal processing, display etc. here ... */

  TIFR1 = _BV(TOV1);              // Clears the TOV1 flag (write 1 to clear).
}



      • #63
        Originally posted by Teleno View Post
        Try this code without interrupts.
        That looks very interesting indeed...



        • #64
          Originally posted by Michaelo View Post
          That looks very interesting indeed...
          Another one with some assembly thrown in.

          PHP Code:
#include <TimerOne.h>

#define CYCLE_TIME 1662 // 0.0015625 seconds @ 640 PPS // added 100 to 1562 to make it closer to 640 pps //

#define TX_PULSE         100 // (100µs)  TX pulse duration
#define PULSE_1_DELAY     20 // ( 20µs)  Delay before Sample Pulse 1
#define PULSE_1           45 // ( 45µs)  Sample 1 pulse duration
#define PULSE_2_DELAY    100 // (100µs)  Delay between Sample Pulse 1 and Pulse 2
#define PULSE_2           45 // ( 45µs)  Sample 2 pulse duration
/*
 Total pulse time         310
 -----------------------------
 Cycle time              1562
 Pulse times             -310
 Delay till next cycle   1252

 Actual pulse rate is 1 / 1562µs ≈ 640 PPS
*/

void setup()
{
  cli();                          // This code needs interrupts disabled.

  pinMode(A0, OUTPUT);
  pinMode(A1, OUTPUT);
  pinMode(A2, OUTPUT);

  digitalWrite(A0, LOW);
  digitalWrite(A1, LOW);
  digitalWrite(A2, LOW);

  Timer1.initialize(CYCLE_TIME);  // Configures and starts Timer1
  TIMSK1 |= _BV(TOIE1);           // Sets the timer overflow interrupt enable bit
}

void loop()
{
  asm volatile(                   // Waits for the TOV1 overflow flag.
    "wait:          \n"
    "  sbis %0, %1  \n"           // skip next instruction if TOV1 set in TIFR1
    "  rjmp wait    \n"
    : : "I" (_SFR_IO_ADDR(TIFR1)), "I" (TOV1)
  );

  // Begin cycle
  PINC = 0x01;                    // writing 1 to a PINx bit toggles the pin: A0 high
  delayMicroseconds(TX_PULSE);
  PINC = 0x01;                    // A0 low
  delayMicroseconds(PULSE_1_DELAY);
  PINC = 0x02;                    // A1 high
  delayMicroseconds(PULSE_1);
  PINC = 0x02;                    // A1 low
  delayMicroseconds(PULSE_2_DELAY);
  PINC = 0x04;                    // A2 high
  delayMicroseconds(PULSE_2);
  PINC = 0x04;                    // A2 low
  // End cycle

  /* .... signal processing, display etc. here ... */

  TIFR1 = _BV(TOV1);              // Clears the TOV1 flag (write 1 to clear).
}




          • #65
            Originally posted by Teleno View Post
            Another one with some assembly thrown in.
I'd stick with your first suggestion, the loop version; in the scheme of things, saving a few µs with assembly when we already have a ~350µs loop doesn't give us all that much gain... I also thought about switching the pins with assembly code, as I did in one of the examples, but it dawned on me that just reducing one of the delays between pulses by a few µs would probably gain more time...

            I guess it all depends on the remaining code, but whatever happens in the rest of the loop, it must execute within the off period (~1300µs), otherwise we delay the pulses... I suppose it's not that critical, but as we can, we should...

            Mike



            • #66
Teleno, the kind of jitter Davor was worried about was on the order of 1-2 clock cycles. For a 20MHz clock each cycle is 50ns, so the maximum jitter would be 100ns. However, it is "somewhat" random in nature and can be eliminated if it ever poses a problem, so I think this jitter thing got a bit out of hand.

              But polling takes several instructions, with likely response times on the order of 4-5 cycles (sbrc, rjmp, rjmp) - it's more straightforward to use the timer interrupt if that 50-100ns jitter is of no consequence: all critical timing can go in the timer interrupt, and housekeeping, display updates, button polling etc. in the main loop. Just use the previous timer interrupt to calculate the values for the next one, so there is no ambiguous code run length between the timer interrupt trigger and the IO update. If there are switch cases etc. it'll be hard to predict the actual run time.



              • #67
                Originally posted by ODM View Post
                But polling would take several instructions and likely response times on the order of 4-5 (sbrc, rjmp, rjmp) - it's more straightforward to use the timer interrupt if that 50-100ns jitter is of no consequence

                SBIS - Clock cycles:
                1 if condition is false (no skip)
                2 if condition is true (skip is executed) and the instruction skipped is 1 word
                3 if condition is true (skip is executed) and the instruction skipped is 2 words

RJMP is one word long, so a successful skip costs only 2 clock cycles in the waiting loop (125ns at 16MHz). The next instruction is already useful code.

                If you can afford it, polling is the better approach because the time to the first useful instruction is shorter. With polling, the useful code starts immediately after the condition is detected. In contrast, an ISR implies stack manipulation, which is especially costly in C because not only the PC and status register get pushed, but a number of other registers as well (r0 and r1 as a minimum in avr-gcc).

                Think, for example, of a frequency meter: polling allows measuring higher frequencies than would be achievable with interrupts.

                See this AVR application note on page 13: http://web.csulb.edu/~hill/ee346/Lec...P%20Timers.pdf



                • #68
If you reserve some of the AVR's 32 working registers for interrupt use only, the context switch is very short. But all in all, if most of the code it runs is known, and it can update one character per cycle or so, there's no reason to run by interrupt, save for safer handling of the TX IO or other toggles that can burn stuff.

                  That "application note" is actually someone's lecture, and reads more like an example of how to start a timer and then read it than of efficient use. It's a fine and dandy way of making a delay precise to a few clocks, if the processor is all right with not doing much in the meantime. Setting up the timer, then doing useful stuff and finally polling until the timer reaches its limit requires checking that any and all code run after the timer setup and before the polling loop fits into that time window, and doesn't stall execution the way some carelessly written code can.

                  Also, it's very nice to handle convenient large-block things such as LCD updates, curve fitting and so on - not just toggling pins on and off - in a single function that's easy to rewrite, or to reuse as-is in other projects, without breaking it apart into sections that make for kooky, cycle-sensitive, horrible-to-maintain code. Whether to do that is anyone's own choice. Bottom line is, I wouldn't do it for a personal project, certainly not for a hobby project meant to be shared with others, and if I did it in a company project I'd probably get a few good laughs at the coffee table and be fired afterwards.



                  • #69
                    Originally posted by ODM View Post
Also, it's very nice to handle convenient large-block things such as LCD updates, curve fitting and so on - not just toggling pins on and off - in a single function that's easy to rewrite, or to reuse as-is in other projects, without breaking it apart into sections that make for kooky, cycle-sensitive, horrible-to-maintain code. Whether to do that is anyone's own choice. Bottom line is, I wouldn't do it for a personal project, certainly not for a hobby project meant to be shared with others, and if I did it in a company project I'd probably get a few good laughs at the coffee table and be fired afterwards.
You like to make assumptions, don't you? Nowhere did I recommend inlining the DSP and display code in a single function, as you suggest. The "..." placeholder indicates where, in the timeline, such processes would take place.

                    You also seem not to have read my disclaimer: "A PI does not need to be a computation-intensive application." We're talking about an embedded controller for the Baracuda here, not banking software. The idea is a controller whose total process time fits loosely inside a single PI cycle of > 1ms. Your "curve fitting" argument is misplaced.

                    You give the impression of stubbornly defending a lofty point of view that's oblivious to the actual needs of the project.

                    By the way, I've looked but didn't see your excellent code. I'm sure you're working on it.



                    • #70
I'd thought it nice to write a user interface that doesn't need to be done one character at a time between toggles, since an LCD is part of the project. There would also be no need to stop running the detector for some cycles to process data, especially since there are just a few kB of memory to sample into anyway, and any digital filters etc. are going to eat up a good few cycles on an 8-bit CPU with no multiply-accumulate instruction, just an 8x8 multiplier. If these are OK tradeoffs, that's all fine! I'm just saying that a simple application shouldn't be used as an excuse to learn bad programming habits that can eventually burn one's fingers, even with a simple project like this one.

                      Stubbornness is to be expected. We engineers are a stubborn breed, particularly on our home turf ... like when someone insists that, since they are a professional of another field, they're allowed to hammer nails into the wall with their forehead. Optimization is making things as simple as they can be, but not simpler than they should be. Like that jitter part - if it's of no consequence, then it can just be ignored - but there are other very good reasons to use interrupt-run timing, like being able to just write a user interface instead of struggling uphill to fit it in between the detector timing. And the big bonus is being able to write the detector timing separately from the user interface, so each can be tweaked independently.
                      I think we should sit down with a lot of beer and a couple of sketchpads, and we would perhaps understand each other better!



                      • #71
                        Originally posted by ODM View Post
I'm just saying that a simple application shouldn't be used as an excuse to learn bad programming habits that can eventually burn one's fingers, even with a simple project like this one.

                        Stubbornness is to be expected. We engineers are a stubborn breed, particularly on our home turf ... like when someone insists that, since they are a professional of another field, they're allowed to hammer nails into the wall with their forehead. Optimization is making things as simple as they can be, but not simpler than they should be. Like that jitter part - if it's of no consequence, then it can just be ignored - but there are other very good reasons to use interrupt-run timing, like being able to just write a user interface instead of struggling uphill to fit it in between the detector timing. And the big bonus is being able to write the detector timing separately from the user interface, so each can be tweaked independently.
                        I think we should sit down with a beer and a couple of sketchpads, and we would perhaps understand each other better!
You're accusing others of crappy programming without a whiff of evidence, yet you're incapable of proposing a single line of improved code.

                        You're not the only engineer here, and I'm afraid there's little evidence of your professed "superior" skills.

                        Originally posted by ODM View Post
                        I'd thought it nice to write an user interface that doesn't need to be done one character at a time between toggles since a LCD is part of the project. There would also be no need to stop running the detector for some cycles to process data, especially since there's just a few k of memory anyway to sample into, and any digital filters etc. are going to eat up a good few cycles on an 8bit cpu without multiply-accumulate instructions, just an 8x8 multiplier. If these are OK tradeoffs, that's all fine!
A 16x2 LCD takes 39-43µs to place a character or execute a command, except for clearing the display and seeking the cursor to the home position, which take about 1.64ms. Even if you send only one character per PI cycle, the full-screen refresh rate would be around 20Hz. That's 10x faster than the user would need. This is not a real-time imaging application, it's just a parameter display.



                        • #72
                          Originally posted by Teleno View Post
                          You're accusing others of crappy programming without a whiff of evidence and yet incapable of proposing a single line of improved code.
                          You're not the only engineer here and I'm afraid there's little evidence of your pretended "superior" skills...
So far this topic has gone in a most positive direction! I am glad to see a few very conversant and skilled experts here! Lots to learn from you guys! Thanks in advance!
                          But please calm down and don't spoil the good spirit.
                          Egos aside - personal ego is the most irrelevant thing in life, trust me!
                          Just keep on like you have so far!
                          Best regards!

                          P.S.
                          I am following this with the greatest attention. But I will not mix in and annoy you all, because my knowledge in these matters is next to zero. Still learning.



                          • #73
                            Originally posted by Teleno View Post
                            ...
A 16x2 LCD takes 39-43µs to place a character or execute a command, except for clearing the display and seeking the cursor to the home position, which take about 1.64ms. Even if you send only one character per PI cycle, the full-screen refresh rate would be around 20Hz. That's 10x faster than the user would need. This is not a real-time imaging application, it's just a parameter display.

But if you implement a "bar scale" to display the rise/fall of the amplitude at detection (or whatever else), real time is desirable. Such a bar scale is perfect for pinpointing.



                            • #74
I don't mean to sound offensive, but frankly that is pretty crappy programming. Let's take an example: for LCD use it's rather nice to pick a ready-made library (such as http://community.atmel.com/projects/hd44780-library ) and focus the effort on the actual detector implementation. Programming is not so much about writing code to bit-bang displays and toggle IOs at the same time as it is about avoiding writing all those lines in the first place, with a library like that.

                              With this kind of library, which uses no hardware timers but counter-based delay loops, the CPU is practically wasting time in the delays needed to drive an HD44780 display, just decrementing and jumping. If we want to do similar counter-delay-based debouncing for switches, that's all right too. All these delays simply get preempted by the timer overflow interrupt and never hold back the actual detector timing itself. The detector can run however it wants, regardless of how we time the LCD, the button debouncing and the bleepy beeps for the speaker.

                              We can also write a cute little menu system that just jumps from one menu loop to another to update different parameters for ground testing. If we want to add some co-dependency between parameters, even a complex kind, it's a breeze compared to having to think about where it will be squeezed in, or whether those parameters will be parked in some temp registers while waiting for the next pause in processing. With a library like that, we can just tell the thing to write a string of text on the screen. How's that for convenience?

                              Look, the point I'm trying to make here is that it is better to learn some good practices early. It makes reusing other people's code, and your own, much easier (or in this case possible at all). If after all this it still sounds like a damn good idea to write a cycle-critical timing system that intersperses one-character-at-a-time display updates with handling any kind of user interface, together with the odd chance of a MOSFET-burning halt, I'm frankly at a loss for words. And that burning MOSFET won't be an example of how unreliable microcontrollers are; it'll be an example of how poor programming burns fingers.

                              Yep. I'll write something up this week/weekend if I have the time, but frankly it's not really motivating - if the above doesn't sound convincing as a decent example of why people are generally taught to do things the way they are done, I'm not sure what it takes.

                              If everything that thing ever needs to do is put out some pulses, read an ADC and write a few numbers on the screen, it'll be faster to just write a chain of delay_us(x) and port=stuff. But if there is a menu that spells out delay names and numbers and does button-toggled stuff, it'll be slower to implement and a pain in the *** to change anything, _anything_, later on - let alone debug. Replacing a shoddy RC timer with a shoddy program defeats the point of adding a microcontroller in the first place.

                              Still, that offer for beers is open if you (or another geotech-ite) ever end up here on the cold side of hell. The soil's still frozen.



                              • #75
... and as soon as it thaws, cohorts of mosquitoes fly out...

