How to write UT for modifyRequestBodyFilter - spring-cloud

I wrote my filters with ModifyRequestBodyGatewayFilterFactory. They work well in system tests.
But I want to write unit tests for the filters.
I tried to find an example in the folders below,
but I can't find an example that gets the request body to check.
Can someone give me an example?


find all _stringbetween occurrences within a webpage

I am working on an AutoIt script to find select messages within a chatroom-type webpage. I have no problem placing the sent text between two special characters to make it easier to find, or filtering out all of the unwanted stuff. The problem I am having is that once _StringBetween finds what it is looking for, it doesn't continue looking. For testing I have the values returning in a GUI box. If there is a way to return all text between "^" and "^", it would solve my problem. Here is what I have so far:
$html = _StringBetween(_INetGetSource(''), '^','^')
MsgBox(0, "title", $html[0])
Also, if anyone knows of a better way to pull select messages from Campfire, that would also solve my problem, maybe using the star feature... If you would like to look at the source code and APIs of Campfire, they are available on GitHub.
_StringBetween returns all the occurrences it finds, as an array; your MsgBox only shows the first element, $html[0].
You can make this simple test:
#include <Array.au3>
For your source I don't know what's happening, but test like this:
#include <Array.au3>
$Source = BinaryToString(InetRead(""))
If the problem persists, paste your page's source code somewhere and post the link.

In Go when using the Example… testing method is there a way to have it show a diff instead of got… want…?

I've been using Go for a bigger project and love it, and for my testing I've been using the
func ExampleXxx() {
    // ... code ...
    // Output:
    // ... expected output ...
}
method for testing. When it fails it will say
... bunch of lines showing the output of test ...
... the comment you put in to show what you expected ...
Is there any way to make it show just the difference? I could copy the two outputs to separate files and run a diff, but I'd much rather have it show only the parts that were wrong, since some of my tests have longer output.
Thanks in advance
I want the failure output to show a diff, not the current output. I know I can do the diff manually.
No, you cannot do this; it is not the intended use of Examples.
Examples are a nice way to show how some function behaves: Examples exist to document. The main reason for validating the example output is to make sure the Examples themselves are valid and correct, not that your code is okay. For the latter you have Test functions.
Most often the output of an Example displays the input and output (or just the output) of one invocation of a certain function or method per line; sometimes Examples use the different lines to show parts of a complex result, e.g. one line per element of a returned slice.
I think your use of Examples to "verify the flow of my program" contradicts the intention of Examples. I would use Test functions and any of the available diff tools to generate got, want, diff output myself if I wanted to test, e.g., a text processor on large batches of input.
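To get that diff-only effect from a plain Test function, one option is to compare got and want line by line and report just the lines that differ. This is a minimal sketch (diffLines is a hypothetical helper, not part of the standard library; dedicated diff libraries exist and do a better job):

```go
package main

import (
	"fmt"
	"strings"
)

// diffLines compares got and want line by line and returns a
// report containing only the lines that differ.
func diffLines(got, want string) []string {
	g := strings.Split(got, "\n")
	w := strings.Split(want, "\n")
	n := len(g)
	if len(w) > n {
		n = len(w)
	}
	var out []string
	for i := 0; i < n; i++ {
		var gl, wl string
		if i < len(g) {
			gl = g[i]
		}
		if i < len(w) {
			wl = w[i]
		}
		if gl != wl {
			out = append(out, fmt.Sprintf("line %d: got %q, want %q", i+1, gl, wl))
		}
	}
	return out
}

func main() {
	// Only the mismatching line is reported, not the whole output.
	for _, d := range diffLines("a\nb\nc", "a\nx\nc") {
		fmt.Println(d)
	}
}
```

In a real test you would call diffLines inside a Test function and pass each returned line to t.Errorf.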
If I understand your question correctly, it sounds like GoConvey would do the trick... It's a TDD tool that runs in the browser, and it will show you colored diffs for most failures:
You can use it with your existing tests; if you don't want to convert to the GoConvey DSL, that's okay. You don't have to be using a TDD workflow per se for it to work: if you can run go test, this tool should be able to pick it up. (I've never used the Example functions for testing... I'm not sure what you mean by that, honestly.)
(There's a new web UI in the works that will still show the diff.)
These are contrived examples, but obviously the diff is more useful with longer output.
Is that kind of what you're looking for?
From the style guide:
if got != tt.want {
    t.Errorf("Foo(%q) = %d; want %d", tt.in, got, tt.want) // or Fatalf, if the test can't test anything more past this point
}
This will print only the errors, of course. There is nothing built into Go that lets you show a diff. I think just piping this to whatever diff tool you are using, against your last output, would still be best.
Go is great, but there's no reason to re-invent tools that already exist at the terminal and already do a fantastic job.
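Expanded into a complete, runnable sketch, the style-guide fragment looks like this (Foo and the test table are made up for illustration; in a real Test function the Printf would be t.Errorf):

```go
package main

import "fmt"

// Foo is a hypothetical function under test; here it simply
// returns the length of its input so the example is runnable.
func Foo(s string) int { return len(s) }

func main() {
	tests := []struct {
		in   string
		want int
	}{
		{"a", 1},
		{"abc", 3},
		{"ab", 3}, // deliberately wrong, to show the failure line
	}
	for _, tt := range tests {
		if got := Foo(tt.in); got != tt.want {
			// In a real test: t.Errorf("Foo(%q) = %d; want %d", ...)
			fmt.Printf("Foo(%q) = %d; want %d\n", tt.in, got, tt.want)
		}
	}
}
```

Only the failing cases produce output, which keeps long test runs readable.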

Error handling and unit tests and code coverage

I'm creating a bunch of libraries for personal use and I'm inserting custom error handling into my code.
I put these error messages in lots of places (well, everywhere an error may occur or something unpredicted happens).
I'm now creating test classes for my libraries (well actually I create the test classes as I go along, but....)
I've been doing some reading on code coverage etc, and I have a question about my process (I want to get into good habits).
As mentioned above my methods do a lot of error handling.
In my tests I create 2 tests
Success : tests for the expected return item (value, object etc).
Failure : pass in 'bad' stuff, and checks that I get my error message.
This seems like a valid method for testing my code, but the more I read the less sure I am.
Any advice on how to improve my test is welcome (or pointers to resources on the net).
Thanks in advance,
and sorry if this seems like a 'stupid' question (which it does to me.... a little bit)
This sounds like a good way of testing to me. If you test each possible scenario and cover every outcome of every method, what more could you do?
Make sure to test a successful run through, and then look at your code and see everything that can go wrong. Test that it fails in the way you expect and throws the expected exception, etc. So, instead of just writing two tests like you currently have, write one for each possible failure, along with a few to check that it passes when expected. This may mean writing a lot of tests.
For instance, if you have code with something like this:
if (x && y && z) {
    // ...
} else {
    // ...
}
then it might be a good idea to test what happens if x and y occur but not z, if x occurs alone, if y and z occur, etc. This may seem petty, but it is a good idea to cover as many possible scenarios as possible.
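As a sketch of what exhaustively covering those combinations looks like (written in Go for concreteness; decide is a hypothetical stand-in for the branch under test):

```go
package main

import "fmt"

// decide is a hypothetical stand-in for the branch under test:
// it returns "then" when x && y && z holds, otherwise "else".
func decide(x, y, z bool) string {
	if x && y && z {
		return "then"
	}
	return "else"
}

func main() {
	// Enumerate every combination of the three conditions so each
	// path through the if is exercised at least once.
	for _, x := range []bool{false, true} {
		for _, y := range []bool{false, true} {
			for _, z := range []bool{false, true} {
				fmt.Printf("x=%v y=%v z=%v -> %s\n", x, y, z, decide(x, y, z))
			}
		}
	}
}
```

In a test framework each combination would become one row of a table-driven test rather than a printed line.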
In terms of getting into good habits, the best way to write tests is to write them as you go along. So, write a test, write the code to pass the test, repeat. This means that all code written adds value and helps you break a problem down into smaller chunks. This is known as TDD (Test Driven Development). There are plenty of places to read up on TDD online, including sites such as
Hope this answer helps. If you need any extra explanation on anything I have said, let me know.

getResources for ModX Evolution?

Does anyone know if getResources works for ModX Evolution? I've been trying to get it working for a while now with no success.
If there is no way to get it working, does anyone know of an equivalent way to get multiple resources to show on the one page (with their templates as well)?
Many thanks
[[!getResources? &parents=`58` &sortdir=`ASC` &sortby=`menuindex` &limit=`100` &includeTVs=`1` &processTVs=`1` &tpl=`eventtemp` ]]
You should be able to replicate this using Ditto in Evolution - the snippet call would be something like:
[!Ditto? &parents=`58` &orderBy=`menuindex ASC` &display=`100` &tpl=`eventtemp`!]
100 is quite a lot of resources to list on one page though. The query might be a bit slow, are you sure you want to do that?

Unit tests for HTML Output?

This may be a dumb question, but do you make unit tests for the HTML output of your PHP functions/scripts?
I try to keep my HTML and my PHP separate - i.e. HTML includes with placeholders, and functions for certain recurring elements (tabular data / any sort of looped output) - but I'm not sure how to go about verifying this.
Is there a standard way to go about such things, or is it mainly a matter of using regular unit tests on functions which create the inserted content, and then making sure it looks correct in the browser/W3C Validator?
Edit: I guess a corollary to this would be: are these sorts of unit tests even worth having? If you're keeping your content and structure properly separated, then you would really only be testing a handful of includes in very limited scenarios (presumably, anyway). Is it really worth it to semi-hand-craft full pages just to have a file to compare to?
Based on my experience in testing HTML, I now follow these three basic rules:
1. Don't test HTML output against a correct template.
You will modify the outputted HTML too often, and you'll end up wasting time maintaining your tests.
2. Check for the existence of important data in generated HTML.
If you're generating HTML (as opposed to static HTML that you've written once), test the generated HTML for the important data. For instance: if you're generating a table based on a two-dimensional array, check that the values in the array are found somewhere in the generated HTML. Don't bother validating the complete output, as this would violate #1.
3. Validate if output is proper HTML.
Validate all output for correct HTML in order to avoid silly mistakes, like missing end tags. I've written a library for this, which can be used absolutely free.
This PHP library will let you validate whether a string is valid HTML5, and it is compatible with PHPUnit or any other testing framework.
Download and documentation here.
Easy to use, example:
$validator = new HTML5Validate();
// Validate (returns TRUE or FALSE)
$result = $validator->Assert('<p>Hello World</p>');
// Get an explanation of what's wrong (if validation failed)
print $validator->message;
Testing for HTML output would be considered a coverage test. Initially, when I started using PHP I was creating these tests, but over time I found that these tests weren't really all that helpful.
If there is one thing that I know, it is that the presentation is going to change a lot from initial development to deployment.
If you think about it, a for loop really is not logic but an isometric transformation function, and if you follow Separation of Concerns, then you are passing the data into the for loop via a method of some sort. I would recommend testing that the for loop gets the correct data, but not the output of the for loop.
If you find yourself repeating yourself in generating tables then by all means start unit testing those table templates. But once again, you'll find that those templates will be seeing a lot of change.
At this point you should be looking at separating the iteration from the HTML output to help isolate yourself from these concerns in your tests.
One way to do this is to use a mapping function, it will take a list and transformation function and perform the function on each item in the list, then return the transformed list.
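A minimal sketch of such a mapping function (in Go rather than PHP for this illustration; PHP's built-in array_map plays the same role):

```go
package main

import "fmt"

// mapStrings applies fn to every element of in and returns the
// transformed list, keeping the iteration separate from the
// per-item formatting.
func mapStrings(in []string, fn func(string) string) []string {
	out := make([]string, len(in))
	for i, s := range in {
		out[i] = fn(s)
	}
	return out
}

func main() {
	// The per-cell transformation is the only piece that knows
	// about HTML, so it can be tested in isolation.
	cell := func(s string) string { return "<td>" + s + "</td>" }
	fmt.Println(mapStrings([]string{"a", "b"}, cell))
}
```

With the iteration factored out, the unit test only needs to cover the small transformation function, not the loop.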
Usually, when creating tables, I end up with two for loops in creating a row.
Iterate over all rows.
While in (1) iterate over items in row.
Pretty ugly to unit test that, but with closures you can create function generators that would really be easy [this is said with a grain of salt] to implement.
You can use PHPUnit. It has Output testing.
I found the SimpleTest framework to be very useful; usually I use it for integration tests and PHPUnit for unit tests. They spare me a lot of manually submitted forms, which I would otherwise do over and over again.
It became my habit to follow these points when doing such integration tests:
Try not to repeat tests that are already done with real unit tests. If, for example, you have a unit-tested validating function for email addresses, it doesn't make sense to submit every kind of invalid email address. Only check once whether you are redirected with an error message.
Do not compare the resulting HTML with a complete reference output; you would have to update your tests with every redesign of your pages. Instead check only crucial parts with $webTestCase->assertText('...'); or $webTestCase->assertPattern('/.../');.
With some tiny helper functions, you can gain a lot of robustness. The following function opens a page and checks that it was opened successfully and without warnings. Since there is no compiler for PHP that can give out warnings at design time, you can at least make sure that your code will not produce errors or warnings.
public static function openPageWithNoWarnings($webTestCase, $page, $landingPage = null)
{
    // check that the page can be opened successfully
    $webTestCase->assertTrue($webTestCase->get($page), 'Could not open page!');
    // check that there are no PHP warnings
    $webTestCase->assertNoPattern('/(warning:|error:)/i', 'PHP error or warning on page!');
    // check if we landed on the expected page (maybe after a redirect)
    if (!empty($landingPage))
    {
        $url = $webTestCase->getUrl();
        $file = basename(parse_url($url, PHP_URL_PATH));
        $webTestCase->assertEqual($landingPage, $file,
            sprintf('Expected page "%s", got page "%s".', $landingPage, $file));
    }
}
Such tests are not much work; you can start with very light ones, and they give you instant feedback, with only one mouse click, if something fails.
There is an extension for PHPUnit that does html validation here:
Running into this question myself. I think an approach might be to use something like phpQuery to make your tests less fragile. Instead of testing for exact output, test that there should be an h3 tag ~somewhere~ in the output. If it gets wrapped in a div later because a designer needed to tack on an extra background, or because of some ie6 float bug workaround, then your test still works.
It's not very pure, but still potentially a very useful tool.
In some cases (such as CakePHP Helpers), the purpose of a class or function is to generate consistent HTML for you. In such cases, it's important to test that the expected properties of the generated unit of HTML are correct for given inputs. The question is definitely valid in that context.
PHPUnit provides an assertTag() function for this purpose.
However to echo the others; it's important to stress that unit testing should be done on the smallest possible components of your project, and not entire rendered web pages. There are other tools (Selenium for example) that are designed for ensuring that those individual components are integrated together properly.
One very simple way to do this is with output buffering.
ob_start(); renderWidget(); // renderWidget() stands in for whatever code echoes the HTML
$this->assertEquals('<p>Expected Output Here</p>', ob_get_clean());